Front cover

IBM PowerVM Live Partition Mobility

Explore the PowerVM Enterprise Edition Live Partition Mobility

Move active and inactive partitions between servers

Manage partition migration with an HMC or IVM

John E Bailey
Thomas Prokop
Guido Somers

ibm.com/redbooks
International Technical Support Organization

IBM PowerVM Live Partition Mobility

March 2009

SG24-7460-01
Note: Before using this information and the product it supports, read the information in
“Notices” on page xv.

Second Edition (March 2009)

This edition applies to AIX Version 6.1, AIX 5L Version 5.3 TL7, HMC Version 7.3.2 or later, and
POWER6 technology-based servers, such as the IBM Power System 570 (9117-MMA) and the
IBM Power System 550 Express (8204-E8A).
© Copyright International Business Machines Corporation 2007, 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Chapter 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Cross-system flexibility is the requirement . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Live Partition Mobility is the answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Inactive migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 Active migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.1 Hardware infrastructure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5.2 Components involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.6 Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.1 Inactive migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.2 Active migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.7 Combining mobility with other features . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7.1 High availability clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7.2 AIX Live Application Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Chapter 2. Live Partition Mobility mechanisms . . . . . . . . . . . . . . . . . . . . . 19


2.1 Live Partition Mobility components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.1 Other components affecting Live Partition Mobility . . . . . . . . . . . . . . 22
2.2 Live Partition Mobility prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.1 Capability and compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.2 Readiness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.3 Migratability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3 Partition migration high-level workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4 Inactive partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27



2.4.2 Validation phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4.3 Migration phase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4.4 Migration completion phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4.5 Stopping an inactive partition migration . . . . . . . . . . . . . . . . . . . . . . 31
2.5 Active partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.1 Active partition state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.2 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.3 Validation phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5.4 Partition migration phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.5.5 Migration completion phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.5.6 Virtual I/O Server selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.5.7 Source and destination mover service partitions selection . . . . . . . . 41
2.5.8 Stopping an active migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.6 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.7 AIX and active migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.8 Linux and active migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Chapter 3. Requirements and preparation . . . . . . . . . . . . . . . . . . . . . . . . . 45


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2 Skill considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.3 Requirements for Live Partition Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4 Live Partition Mobility preparation checks . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.5 Preparing the systems for Live Partition Mobility . . . . . . . . . . . . . . . . . . . 54
3.5.1 HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.2 Logical memory block size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.3 Battery power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.5.4 Available memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5.5 Available processors to support Live Partition Mobility . . . . . . . . . . . 58
3.6 Preparing the HMC for Live Partition Mobility . . . . . . . . . . . . . . . . . . . . . . 61
3.7 Preparing the Virtual I/O Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.1 Virtual I/O Server version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.7.2 Mover service partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.7.3 Synchronize time-of-day clocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.8 Preparing the mobile partition for mobility . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.8.1 Operating system version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.8.2 RMC connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.8.3 Disable redundant error path reporting . . . . . . . . . . . . . . . . . . . . . . . 68
3.8.4 Virtual serial adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.8.5 Partition workload groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.8.6 Barrier-synchronization register . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.8.7 Huge pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.8.8 Physical or dedicated I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8.9 Name of logical partition profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78



3.8.10 Mobility-safe or mobility-aware . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.8.11 Changed partition profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.9 Configuring the external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.10 Network considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.11 Distance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

Chapter 4. Basic partition migration scenario . . . . . . . . . . . . . . . . . . . . . . 89


4.1 Basic Live Partition Mobility environment . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.1.1 Minimum requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.1.2 Inactive partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.1.3 Active partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2 Virtual I/O Server attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.1 Mover service partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.2 Virtual Asynchronous Services Interface device . . . . . . . . . . . . . . . . 93
4.2.3 Time reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.3 Preparing for an active partition migration. . . . . . . . . . . . . . . . . . . . . . . . . 94
4.3.1 Enabling the mover service partition . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.3.2 Enabling the Time reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.4 Migrating a logical partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.4.1 Performing the validation steps and eliminating errors . . . . . . . . . . . 99
4.4.2 Inactive or active migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.4.3 Migrating a mobile partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Chapter 5. Advanced topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119


5.1 Dual Virtual I/O Servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.1.1 Dual Virtual I/O Server and client mirroring. . . . . . . . . . . . . . . . . . . 121
5.1.2 Dual Virtual I/O Server and multipath I/O . . . . . . . . . . . . . . . . . . . . 124
5.1.3 Single to dual Virtual I/O Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.2 Multiple concurrent migrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3 Dual HMC considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.4 Remote Live Partition Mobility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.4.1 Requirements for remote migration. . . . . . . . . . . . . . . . . . . . . . . . . 132
5.4.2 HMC considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.4.3 Remote validation and migration. . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.4.4 Command-line interface enhancements . . . . . . . . . . . . . . . . . . . . . 145
5.5 Multiple shared processor pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.5.1 Shared processor pools in migration and validation GUI . . . . . . . . 147
5.5.2 Processor pools on command line . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.6 Migrating a partition with physical resources. . . . . . . . . . . . . . . . . . . . . . 149
5.6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.6.2 Configure a Virtual I/O Server on the source system . . . . . . . . . . . 150
5.6.3 Configure a Virtual I/O Server on the destination system . . . . . . . . 152
5.6.4 Configure storage on the mobile partition . . . . . . . . . . . . . . . . . . . . 153

5.6.5 Configure network on the mobile partition. . . . . . . . . . . . . . . . . . . . 157
5.6.6 Remove adapters from the mobile partition . . . . . . . . . . . . . . . . . . 160
5.6.7 Ready to migrate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5.7 The command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.7.1 The migrlpar command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.7.2 The lslparmigr command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.7.3 The lssyscfg command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.7.4 The mkauthkeys command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.7.5 A more complex example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.8 Migration awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.9 Making applications migration-aware . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.9.1 Migration phases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.9.2 Making programs migration-aware using APIs . . . . . . . . . . . . . . . . 179
5.9.3 Making applications migration-aware using scripts . . . . . . . . . . . . . 182
5.10 Making kernel extensions migration-aware . . . . . . . . . . . . . . . . . . . . . 185
5.11 Virtual Fibre Channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.11.1 Basic virtual Fibre Channel Live Partition Mobility preparation . . . 190
5.11.2 Migration of a virtual Fibre Channel based partition . . . . . . . . . . . 193
5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing. . . 195
5.11.4 Live Partition Mobility with Heterogeneous I/O . . . . . . . . . . . . . . . 198
5.12 Processor compatibility modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.12.1 Verifying the processor compatibility mode of the mobile partition 208

Chapter 6. Migration status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213


6.1 Progress and reference code location. . . . . . . . . . . . . . . . . . . . . . . . . . . 214
6.2 Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6.3 A recovery example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

Chapter 7. Integrated Virtualization Manager for Live Partition Mobility 221


7.1 Migration types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7.2 Requirements for Live Partition Mobility on IVM . . . . . . . . . . . . . . . . . . . 222
7.3 How active Partition Mobility works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
7.4 How inactive Partition Mobility works . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
7.5 Validation for active Partition Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
7.6 Validation for inactive Partition Mobility. . . . . . . . . . . . . . . . . . . . . . . . . . 231
7.7 Preparation for partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7.7.1 Preparing the source and destination servers. . . . . . . . . . . . . . . . . 232
7.7.2 Preparing the management partition for Partition Mobility . . . . . . . 238
7.7.3 Preparing the mobile partition for Partition Mobility. . . . . . . . . . . . . 239
7.7.4 Preparing the virtual SCSI configuration for Partition Mobility . . . . 244
7.7.5 Preparing the virtual Fibre Channel configuration . . . . . . . . . . . . . . 248
7.7.6 Preparing the network configuration for Partition Mobility . . . . . . . . 253
7.7.7 Validating the Partition Mobility environment . . . . . . . . . . . . . . . . . 257



7.7.8 Migrating the mobile partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

Appendix A. Error codes and logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259


SRCs, current state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
SRC error codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
IVM source and destination systems error codes . . . . . . . . . . . . . . . . . . . . . 262
Operating system error logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266

Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275

Figures

1-1 Hardware infrastructure enabled for Live Partition Mobility. . . . . . . . . . . . . 9


1-2 A mobile partition during migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1-3 The final configuration after a migration is complete. . . . . . . . . . . . . . . . . 11
1-4 Migrating all partitions of a system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1-5 AIX Workload Partition example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2-1 Live Partition Mobility components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2-2 Inactive migration validation workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2-3 Inactive migration state flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2-4 Inactive migration workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2-5 Active migration validation workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2-6 Migration phase of an active migration . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2-7 Active migration partition state transfer path . . . . . . . . . . . . . . . . . . . . . . . 37
3-1 Activation of Enterprise Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3-2 Enter activation code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3-3 Checking the current firmware level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3-4 Checking and changing LMB size with ASMI . . . . . . . . . . . . . . . . . . . . . . 55
3-5 Checking the amount of memory of the mobile partition. . . . . . . . . . . . . . 56
3-6 Available memory on destination system . . . . . . . . . . . . . . . . . . . . . . . . . 57
3-7 Checking the number of processing units of the mobile partition . . . . . . . 59
3-8 Available processing units on destination system . . . . . . . . . . . . . . . . . . . 60
3-9 Checking the version and release of HMC . . . . . . . . . . . . . . . . . . . . . . . . 61
3-10 Upgrading the Hardware Management Console. . . . . . . . . . . . . . . . . . . 62
3-11 Install Corrective Service to upgrade the HMC . . . . . . . . . . . . . . . . . . . . 63
3-12 Enabling mover service partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3-13 Synchronizing the time-of-day clocks . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3-14 Disable redundant error path handling . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3-15 Verifying the number of serial adapters on the mobile partition . . . . . . . 70
3-16 Disabling partition workload group - Other tab . . . . . . . . . . . . . . . . . . . . 71
3-17 Disabling partition workload group - Settings tab . . . . . . . . . . . . . . . . . . 72
3-18 Checking the number of BSR arrays on the mobile partition . . . . . . . . . 73
3-19 Setting number of BSR arrays to zero . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3-20 Checking if huge page memory equals zero . . . . . . . . . . . . . . . . . . . . . . 75
3-21 Setting Huge Page Memory to zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-22 Checking if there are required resources in the mobile partition . . . . . . . 77
3-23 Logical Host Ethernet Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3-24 Virtual SCSI client adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3-25 Virtual SCSI server adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3-26 Checking free virtual slots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85



3-27 The Virtual SCSI Topology of the mobile partition . . . . . . . . . . . . . . . . . 86
4-1 Basic Live Partition Mobility configuration . . . . . . . . . . . . . . . . . . . . . . . . . 90
4-2 Hardware Management Console Workplace . . . . . . . . . . . . . . . . . . . . . . 95
4-3 Create LPAR Wizard window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4-4 Changing the Virtual I/O Server partition property . . . . . . . . . . . . . . . . . . 97
4-5 Enabling the Mover service partition attribute . . . . . . . . . . . . . . . . . . . . . . 97
4-6 Enabling the Time reference attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4-7 Validate menu on the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4-8 Selecting the Remote HMC and Destination System . . . . . . . . . . . . . . . 101
4-9 Partition Validation Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4-10 Partition Validation Warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4-11 Validation window after validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4-12 System environment before migrating . . . . . . . . . . . . . . . . . . . . . . . . . 104
4-13 Migrate menu on the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4-14 Migration Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4-15 Specifying the profile name on the destination system . . . . . . . . . . . . . 107
4-16 Optionally specifying the Remote HMC of the destination system . . . . 108
4-17 Selecting the destination system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4-18 Sample of Partition Validation Errors/Warnings . . . . . . . . . . . . . . . . . . 110
4-19 Selecting mover service partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4-20 Selecting the VLAN configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4-21 Selecting the virtual SCSI adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4-22 Specifying the shared processor pool . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4-23 Specifying wait time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4-24 Partition Migration Summary panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4-25 Partition Migration Status window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4-26 Migrated partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5-1 Dual VIOS and client mirroring to dual VIOS before migration . . . . . . . . 122
5-2 Dual VIOS and client mirroring to dual VIOS after migration . . . . . . . . . 123
5-3 Dual VIOS and client mirroring to single VIOS after migration . . . . . . . . 124
5-4 Dual VIOS and client multipath I/O to dual VIOS before migration . . . . . 125
5-5 Dual VIOS and client multipath I/O to dual VIOS after migration . . . . . . 126
5-6 Single VIOS to dual VIOS before migration . . . . . . . . . . . . . . . . . . . . . . 127
5-7 Single VIOS to dual VIOS after migration . . . . . . . . . . . . . . . . . . . . . . . . 128
5-8 Live Partition Mobility infrastructure with two HMCs . . . . . . . . . . . . . . . . 133
5-9 Live Partition Mobility infrastructure using private networks . . . . . . . . . . 134
5-10 One public and one private network migration infrastructure . . . . . . . . 135
5-11 Network ping successful to remote HMC . . . . . . . . . . . . . . . . . . . . . . . 136
5-12 HMC option for remote command execution. . . . . . . . . . . . . . . . . . . . . 137
5-13 Remote command execution window . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5-14 Remote migration information entered for validate task . . . . . . . . . . . . 139
5-15 Validation window after destination system refresh . . . . . . . . . . . . . . . 140
5-16 Validation window after validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141



5-17 Local HMC environment before migrating. . . . . . . . . . . . . . . . . . . . . . . 142
5-18 Remote HMC selection window in Migrate task . . . . . . . . . . . . . . . . . . 143
5-19 Remote migration summary window . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5-20 Remote HMC view after remote migration success . . . . . . . . . . . . . . . 145
5-21 Shared processor pool selection in migration wizard . . . . . . . . . . . . . . 147
5-22 Shared processor pool selection in Validate task . . . . . . . . . . . . . . . . . 148
5-23 The mobile partition is using physical resources. . . . . . . . . . . . . . . . . . 149
5-24 The source Virtual I/O Server is created and configured . . . . . . . . . . . 151
5-25 The destination Virtual I/O Server is created and configured . . . . . . . . 153
5-26 The storage devices are configured on the mobile partition . . . . . . . . . 154
5-27 The root volume group extends on to virtual disks . . . . . . . . . . . . . . . . 155
5-28 The root volume group of the mobile partition is on virtual disks only. . 156
5-29 The mobile partition has a virtual network device created . . . . . . . . . . 158
5-30 The mobile partition has unconfigured its physical network interface . . 159
5-31 The mobile partition with only virtual adapters . . . . . . . . . . . . . . . . . . . 161
5-32 The mobile partition on the destination system . . . . . . . . . . . . . . . . . . . 162
5-33 Basic NPIV virtual Fibre Channel infrastructure before migration . . . . . 187
5-34 Basic NPIV virtual Fibre Channel infrastructure after migration . . . . . . 188
5-35 Client partition virtual Fibre Channel adapter WWPN properties . . . . . 190
5-36 Virtual Fibre Channel adapters in the Virtual I/O Server . . . . . . . . . . . 191
5-37 Virtual I/O Server Fibre Channel adapter properties. . . . . . . . . . . . . . . 191
5-38 Selecting the virtual Fibre Channel adapter . . . . . . . . . . . . . . . . . . . . . 193
5-39 Virtual Fibre Channel migration summary window . . . . . . . . . . . . . . . . 194
5-40 Migrated partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5-41 Dual VIOS and client multipath I/O to dual NPIV before migration. . . . 196
5-42 Dual VIOS and client multipath I/O to dual VIOS after migration. . . . . . 197
5-43 The mobile partition using physical resources . . . . . . . . . . . . . . . . . . . 198
5-44 Virtual Fibre Channel server adapter properties . . . . . . . . . . . . . . . . . . 199
5-45 Virtual Fibre Channel client adapter properties . . . . . . . . . . . . . . . . . . . 200
5-46 The mobile partition using physical and virtual resources. . . . . . . . . . . 202
5-47 The mobile partition using virtual resources . . . . . . . . . . . . . . . . . . . . . 203
5-48 The mobile partition on the destination system . . . . . . . . . . . . . . . . . . . 204
5-49 Processor compatibility mode options of the mobile partition . . . . . . . . 209
5-50 Current processor compatibility mode of the mobile partition . . . . . . . . 210
6-1 Partition reference codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
6-2 Migration progress window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
6-3 Recovery menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
6-4 Recovery pop-up window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
6-5 Interrupted active migration status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7-1 Checking release level of the Virtual I/O Server . . . . . . . . . . . . . . . . . . . 223
7-2 More Tasks menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
7-3 Validation task for migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7-4 Checking LMB size with the IVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

7-5 Checking the amount of memory of the mobile partition. . . . . . . . . . . . . 234
7-6 Checking the amount of memory on the destination server . . . . . . . . . . 235
7-7 Checking the amount of processing units of the mobile partition . . . . . . 236
7-8 Checking the amount of processing units on the destination server . . . . 237
7-9 Enter PowerVM Edition key on the IVM. . . . . . . . . . . . . . . . . . . . . . . . . . 239
7-10 Processor compatibility mode on the IVM . . . . . . . . . . . . . . . . . . . . . . . 241
7-11 Checking the partition workload group participation . . . . . . . . . . . . . . . 243
7-12 Checking if the mobile partition has physical adapters . . . . . . . . . . . . . 244
7-13 View/Modify Virtual Fibre Channel window . . . . . . . . . . . . . . . . . . . . . . 249
7-14 Virtual Fibre Channel Partition Connections window . . . . . . . . . . . . . . 250
7-15 Partition selected shows Automatically generate . . . . . . . . . . . . . . . . . 250
7-16 Virtual Fibre Channel on source system . . . . . . . . . . . . . . . . . . . . . . . . 251
7-17 Virtual Fibre Channel on destination system. . . . . . . . . . . . . . . . . . . . . 252
7-18 Selecting physical adapter to be used as a virtual Ethernet bridge . . . 254
7-19 Create virtual Ethernet adapter on the mobile partition. . . . . . . . . . . . . 255
7-20 Create a virtual Ethernet adapter on the management partition . . . . . . 256
7-21 Partition is migrating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258



Tables

1-1 PowerVM Live Partition Mobility Support . . . . . . . . . . . . . . . . . . . . . . . . . 16


3-1 Supported migration matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3-2 Preparing the environment for Live Partition Mobility . . . . . . . . . . . . . . . . 53
3-3 Virtual SCSI adapter worksheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5-1 Dynamic reconfiguration script commands for migration . . . . . . . . . . . . 183
5-2 Processor compatibility modes supported by server type . . . . . . . . . . . . 206
A-1 Progress SRCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
A-2 SRC error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
A-3 Source system generated error codes . . . . . . . . . . . . . . . . . . . . . . . . . . 262
A-4 Destination system generated error codes . . . . . . . . . . . . . . . . . . . . . . . 264
A-5 Operating system error log entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266



Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurement may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.



COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. These and other IBM trademarked
terms are marked on their first occurrence in this information with the appropriate symbol (® or ™),
indicating US registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other countries. A current
list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:

AIX 5L™, AIX®, BladeCenter®, DB2®, Enterprise Storage Server®, GDPS®,
Geographically Dispersed Parallel Sysplex™, GPFS™, HACMP™, i5/OS®, IBM®,
Parallel Sysplex®, POWER Hypervisor™, Power Systems™, POWER4™, POWER5™,
POWER6®, POWER6+™, POWER7™, PowerHA™, PowerVM™, POWER®, Redbooks®,
Redbooks (logo)®, Redpapers™, System p®, Tivoli®, Workload Partitions Manager™

The following terms are trademarks of other companies:

SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and
other countries.

Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.



Preface

Live Partition Mobility is the next step in the IBM Power Systems™ virtualization
continuum. It can be combined with other virtualization technologies, such as
logical partitions, Live Workload Partitions, and the SAN Volume Controller, to
provide a fully virtualized computing platform that offers the degree of system
and infrastructure flexibility required by today’s production data centers.

This IBM® Redbooks® publication discusses how Live Partition Mobility can help
technical professionals, enterprise architects, and system administrators:
• Migrate entire running AIX® and Linux® partitions and hosted applications
from one physical server to another without disrupting services and loads.
• Meet stringent service-level agreements.
• Rebalance loads across systems quickly, with support for multiple concurrent
migrations.
• Use a migration wizard for single partition migrations.

This book can help you understand, plan, prepare, and perform partition
migration on IBM Power Systems servers that are running AIX.

Note: Minor updates and technical corrections are marked by change bars
such as the ones in the left margin on this page. A 2010 update was made to
include POWER7™ servers.

The team that wrote this book


This book was produced by a team of specialists from around the world working
at the International Technical Support Organization (ITSO), Austin Center.

John E Bailey is a Staff Software Engineer working in the IBM Power Systems
Test Organization for IBM USA. He holds a degree in Computer Science from
Prairie View A&M University. He has seven years of experience with IBM Power
Systems and has worked on Live Partition Mobility for three years. His areas of
expertise include AIX, Linux, Hardware Management Console, storage area
networks, PowerVM™ virtualization, and software testing.



Thomas Prokop is a Consulting Certified IT Specialist working as a Field
Technical Sales Specialist in IBM US Sales & Distribution supporting clients and
IBM sales and Business Partners. He also provides pre-sales consultation and
implementation of IBM POWER® and AIX high-end system environments. He
has 18 years of experience with IBM Power Systems and has experience in the
fields of virtualization, performance analysis, PowerVM and complex
implementations.

Guido Somers is a Cross Systems Certified Senior Enterprise Infrastructure
Architect working for the IBM Global Technology Services organization in
Belgium. His focus is on server consolidation, IT optimization and virtualization.
He has 13 years of experience in the Information Technology field, ten years of
which were within IBM. He holds degrees in Biotechnology, Business
Administration, Chemistry, and Electronics, and did research in the field of
Theoretical Physics. His areas of expertise include AIX, Linux, system
performance and tuning, logical partitioning, virtualization, PowerHA™, SAN,
IBM Power Systems servers, and other IBM hardware offerings. He is an author
of many IBM Redbooks publications.

The authors of the first edition of the IBM System p® Live Partition Mobility
Redbook are:

Mitchell Harding, Narutsugu Itoh, Peter Nutt, Guido Somers, Federico Vagnini,
Jez Wain

The project that produced this document was managed by:

Scott Vetter, PMP

Thanks to the following people for their contributions to this project:

John E. Bailey, John Banchy, Kevin J. Cawlfield, Eddie Chen, Steven J. Finnes,
Matthew Harding, Mitchell P. Harding, Tonya L. Holt, David Hu,
Robert C. Jennings, Anil Kalavakolanu, Timothy Marchini, Josh Miers,
Timothy Piasecki, Steven E. Royer, Elizabeth A. Ruth, Maneesh Sharma,
Luc R. Smolders, John D. Spangenberg, Ravindra Tekumallah,
Vasu Vallabhaneni, Jonathan R. Van Niewaal, Dean S. Wilcox
IBM USA

Nigel A. Griffiths, James Lee, Chris Milsted, Dave Williams
IBM U.K.

Jun Nakano
IBM Japan



Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbooks
publication dealing with specific products or solutions, while getting hands-on
experience with leading-edge technologies. You will have the opportunity to team
with IBM technical professionals, Business Partners, and Clients.

Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and
apply online at:
http://www.ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our Redbooks to be as helpful as possible. Send us your comments
about this book or other Redbooks in one of the following ways:
• Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
• Send your comments in an e-mail to:
redbooks@us.ibm.com
• Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Chapter 1. Overview
In this chapter, we provide an overview of Live Partition Mobility with a high-level
description of its features.

This chapter contains the following topics:
• 1.1, “Introduction” on page 2
• 1.2, “Partition migration” on page 3
• 1.3, “Cross-system flexibility is the requirement” on page 3
• 1.4, “Live Partition Mobility is the answer” on page 5
• 1.5, “Architecture” on page 6
• 1.6, “Operation” on page 12
• 1.7, “Combining mobility with other features” on page 14



1.1 Introduction
Live Partition Mobility allows you to migrate partitions that are running AIX and
Linux operating systems and their hosted applications from one physical server
to another without disrupting the infrastructure services. The migration operation,
which takes just a few seconds, maintains complete system transactional
integrity. The migration transfers the entire system environment, including
processor state, memory, attached virtual devices, and connected users.

IBM Power Systems servers are designed to offer the highest stand-alone
availability in the industry. Even so, enterprises must occasionally restructure
their infrastructure to meet new IT requirements. By letting you move running
production applications from one physical server to another, Live Partition
Mobility allows for system maintenance or modification that is nondisruptive to
your users. It mitigates the impact on partitions and applications that was
formerly caused by the occasional need to shut down a system.

Today, even small IBM Power Systems servers frequently host many logical
partitions. As the number of hosted partitions increases, finding a maintenance
window acceptable to all of them becomes increasingly difficult. Live Partition
Mobility allows you to move partitions off a system so that you can perform
previously disruptive operations at your convenience, rather than waiting for the
maintenance window that is least inconvenient to your users.

Live Partition Mobility helps you meet increasingly stringent service-level
agreements (SLAs) because it allows you to proactively move running partitions
and applications from one server to another.

The ability to move running partitions from one server to another also lets you
balance workloads and resources. If a key application’s resource requirements
peak unexpectedly to a point where there is contention for server resources, you
can move it to a more powerful server, or move other, less critical partitions to
different servers and use the freed-up resources to absorb the peak.

Live Partition Mobility may also be used as a mechanism for server
consolidation, because it provides an easy path to move applications from
individual, stand-alone servers to consolidation servers. If you have partitions
with workloads that have widely fluctuating resource requirements over time (for
example, a peak workload at the end of the month or the end of the quarter), you
can use Live Partition Mobility to consolidate partitions onto a single server
during the off-peak period, allowing you to turn off unused servers. You can then
move the partitions back to their own, adequately configured servers just prior to
the peak. This approach also saves energy by reducing the power needed to run
machines and to keep them cool during off-peak periods.



Live Partition Mobility can be automated and incorporated into system
management tools and scripts. Support for multiple concurrent migrations allows
you to free system resources quickly. For single-partition, point-in-time
migrations, the Hardware Management Console (HMC) and the Integrated
Virtualization Manager (IVM) interfaces offer easy-to-use migration wizards.

Live Partition Mobility contributes to the goal of continuous availability, as follows:
• Reduces planned down time by dynamically moving applications from one
server to another
• Responds to changing workloads and business requirements when you move
workloads from heavily loaded servers to servers that have spare capacity
• Reduces energy consumption by allowing you to easily consolidate workloads
and turn off unused servers

Live Partition Mobility is the next step in the IBM PowerVM continuum. It can be
combined with other virtualization technologies, such as logical partitions, Live
Workload Partitions, and the SAN Volume Controller to provide a fully virtualized
computing platform offering the degree of system and infrastructure flexibility
required by today’s production data centers.

1.2 Partition migration


A partition migration operation can occur either when a partition is powered off
(inactive), or when a partition is providing service (active).

During an active partition migration, there is no disruption of system operation or
user service. For example, a partition that is hosting a live production database
with normal user activities can be migrated to a second system with no loss of
data, no loss of connectivity, and no effect on the running transactions.

A logical partition may be migrated between two POWER6® (or later)
technology-based systems, if the destination system has enough resources to
host the partition. There is no restriction on processing units or partition memory
size, either for inactive or for active migration.
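
The resource-fit condition described above can be sketched as a small check.
This is an illustrative model only; the data structures are invented, and the real
validation is performed by the HMC before a migration starts.

```python
from dataclasses import dataclass

@dataclass
class SystemCapacity:
    free_proc_units: float   # unallocated processing units on the destination
    free_memory_mb: int      # unallocated memory on the destination, in MB

@dataclass
class PartitionProfile:
    proc_units: float        # processing units of the mobile partition
    memory_mb: int           # memory of the mobile partition, in MB

def fits(destination: SystemCapacity, mobile: PartitionProfile) -> bool:
    """A migration can proceed only if the destination system has enough
    free processor and memory resources for the mobile partition."""
    return (destination.free_proc_units >= mobile.proc_units
            and destination.free_memory_mb >= mobile.memory_mb)

# A 2.0-processing-unit, 8 GB partition against a destination with
# 3.5 free processing units and 16 GB of free memory:
print(fits(SystemCapacity(3.5, 16384), PartitionProfile(2.0, 8192)))  # True
```

Note that there is no upper limit in the model, mirroring the text: any partition
size is migratable as long as the destination can absorb it.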

1.3 Cross-system flexibility is the requirement


Infrastructure flexibility has become a key criterion when designing and deploying
information technology solutions. Application requirements frequently change
and the hardware infrastructure they rely upon must be capable of adapting to

the new requirements not only in a very short time, but also with minimal to no impact on
the service level. Configuration changes must be applied in a very simple and
secure way, with limited administrator intervention, to reduce change
management costs and the related risk.

The Advanced POWER Virtualization feature introduced in POWER5™-based
systems provides excellent flexibility within each system. The
virtualization of processor capacity and the granular distribution of memory,
combined with network and disk virtualization, enable administrators to create
multiple fine-grained logical partitions within a single system. Computing power
can be distributed among partitions automatically in real time, depending on real
application needs, with no user action. System configuration changes are made
by policy-based controls or by administrators with very simple and secure
operations that do not interrupt service.

Although single-system virtualization greatly improves the flexibility of an IT
solution, service requirements of clients often demand a more comprehensive
view of the entire infrastructure. In many instances, applications are distributed
view of the entire infrastructure. In many instances, applications are distributed
across multiple systems ensuring isolation, optimization of global system
resources, and adaptability of the infrastructure to new workloads.

One of the most time-consuming activities in a complex environment is the
transfer of a workload from one system to another. Among the many possible
reasons for such a migration, common ones include:
• Resource balancing
A system does not have enough resources for the workload while another
system does.
• New system deployment
A workload running on an existing system must be migrated to a new, more
powerful one.
• Availability requirements
When a system requires maintenance, its hosted applications must not be
stopped and can be migrated to another system.

Without a way to migrate a partition, all of these activities require careful planning
and highly skilled people, and often cause significant downtime. In some cases,
an SLA may be so strict that planned outages are not tolerated.



1.4 Live Partition Mobility is the answer
The Live Partition Mobility function offered on POWER6 technology-based
systems is designed to enable the migration of an entire logical partition from one
system to another. Live Partition Mobility uses a simple and automated
procedure that transfers the configuration from source to destination without
disrupting the hosted applications or the setup of the operating system and
applications.

Live Partition Mobility gives the administrator greater control over the use of
resources in the data center. It allows a level of reconfiguration that was not
possible in the past, whether because of complexity or because of SLAs that do
not allow an application to be stopped for an architectural change.

Live Partition Mobility is a feature of the PowerVM Enterprise Edition offering.

The migration process can be performed either with a powered-off or a live
partition. The following two available migration types are discussed in more detail
in the next sections:
Inactive migration The logical partition is powered off and moved to the
destination system.
Active migration The migration of the partition is performed while service is
provided, without disrupting user activities.

1.4.1 Inactive migration


Inactive migration moves the definition of a powered-off logical partition from one
system to another along with its network and disk configuration. No additional
change in network or disk setup is required and the partition can be activated as
soon as migration is completed.

The inactive migration procedure takes care of the reconfiguration of the involved
systems, as follows:
• A new partition is created on the destination system with the same
configuration that is present on the source system.
• Network access and disk data are preserved and made available to the new
partition.
• On the source system, the partition configuration is removed and all involved
resources are freed.
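
The three steps above can be modeled as a short procedure. The function and
dictionaries below are purely illustrative (this is not the HMC interface); they only
show the ordering of the reconfiguration:

```python
def inactive_migration(name, source, destination):
    """Illustrative ordering of an inactive migration; the dictionaries
    stand in for the two managed systems."""
    profile = source[name]
    # 1. Create a partition on the destination system with the same
    #    configuration as on the source system.
    destination[name] = dict(profile)
    # 2. Network access and disk data are preserved: the new partition
    #    refers to the same external LUNs and the same virtual network.
    assert destination[name]["luns"] == profile["luns"]
    # 3. Remove the partition configuration from the source system,
    #    freeing all involved resources.
    del source[name]

source = {"lpar1": {"memory_mb": 4096, "luns": ["LUN0"], "vlan": 1}}
destination = {}
inactive_migration("lpar1", source, destination)
print(sorted(destination), sorted(source))  # ['lpar1'] []
```

The ordering matters: the destination definition is created and wired to storage
before anything is removed from the source, so a failure mid-way leaves the
original configuration recoverable.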

If a partition is down because of scheduled maintenance, or is not in service for
other reasons, an inactive migration may be performed as long as the HMC can
communicate with the source and destination servers, and their respective Virtual
I/O Servers. It is executed in a controlled way and with minimal administrator
interaction so that it can be safely and reliably performed in a very short time
frame.

When the service provided by the partition cannot be interrupted, its relocation
can be performed, with no loss of service, by using the active migration feature.

1.4.2 Active migration


By using active migration, a running partition is moved from a source system to a
destination system with no disruption of partition operation or user service.

An active migration performs the same operations as an inactive migration,
except that the operating system, the applications, and the services they provide
are not stopped during the process. The physical memory content of the logical
partition is copied from system to system, allowing the transfer to be
imperceptible to users.

During an active migration, the applications continue to handle their normal
workload. Disk data transactions, running network connections, user contexts,
and the complete environment are migrated without any loss, and a migration
can be started at any time on any production partition.

No limitation exists on a partition’s computing and memory configuration, and
multiple migrations can be executed concurrently. Both inactive and active
migrations may involve partitions with any processing unit and memory size
configuration.

1.5 Architecture
Live Partition Mobility requires a specific hardware infrastructure. Several
platform components are involved. Live Partition Mobility is controlled by the
Hardware Management Console (HMC) or the Integrated Virtualization Manager
(IVM). This section describes the HMC-based architecture. Chapter 7,
“Integrated Virtualization Manager for Live Partition Mobility” on page 221
describes the IVM-based Live Partition Mobility in detail.



1.5.1 Hardware infrastructure
The primary requirements for the migration of a logical partition are:
• Two POWER6 or POWER7 technology-based systems are required, each
running PowerVM Enterprise Edition with Virtual I/O Server Version 1.5 or
later, and controlled by at least one HMC or each running the IVM.
An optional redundant HMC configuration is supported.
Remote migration between systems controlled by different HMCs running
Version 7 Release 3.4 or later is supported.
Live Partition Mobility is supported for partitions running AIX 5.3 Level
5300-07 or later, AIX 6.1 or later, Red Hat Enterprise Linux version 5 Update
1 or later, or SUSE Linux Enterprise Server 10 Service Pack 1 or later.
• The destination system must have enough processor and memory resources
to host the mobile partition (that is, the partition profile that is running,
because alternate production profiles might exist).
• The operating system, applications, and data of the mobile partition must
reside on virtual storage on an external storage subsystem.
• No physical adapters may be used by the mobile partition during the
migration.
• Migration of partitions using multiple Virtual I/O Servers is supported.
• At the time of publication, migration of partitions between HMC-managed and
IVM-managed systems is not supported.
• The mobile partition’s network and disk access must be virtualized by using
one or more Virtual I/O Servers:
– The Virtual I/O Servers on both systems must have a shared Ethernet
adapter configured to bridge to the same Ethernet network used by the
mobile partition.
– The Virtual I/O Servers on both systems must be capable of providing
virtual access to all disk resources that the mobile partition is using.
– The disks used by the mobile partition must be accessed through virtual
SCSI, virtual Fibre Channel-based mapping, or both.

Note: Virtual Fibre Channel support for migration is introduced in PowerVM
Virtual I/O Server Version 2.1. Virtual Fibre Channel uses N_Port ID
Virtualization (NPIV) to access SAN resources using shared Fibre Channel
adapters. See Chapter 2 in PowerVM Virtualization on IBM System p:
Managing and Monitoring, SG24-7590 for details about virtual Fibre Channel
and NPIV configuration.
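
Taken together, the requirements above form a checklist that the HMC validation
phase works through before any migration. The following sketch models that
checklist with an invented partition description; the field names are assumptions
for illustration, not HMC attributes:

```python
def migration_ready(partition):
    """Return the list of unmet Live Partition Mobility prerequisites for
    a partition described as a plain dict (illustrative model only)."""
    problems = []
    if partition.get("physical_adapters"):
        problems.append("partition still owns physical adapters")
    if not partition.get("storage_on_external_san"):
        problems.append("disks are not on an external storage subsystem")
    if partition.get("disk_access") not in ("vscsi", "npiv", "both"):
        problems.append("disks are not accessed through virtual SCSI "
                        "or virtual Fibre Channel")
    if not partition.get("network_bridged_by_sea"):
        problems.append("network is not bridged by a shared Ethernet adapter")
    return problems

mobile = {"physical_adapters": [], "storage_on_external_san": True,
          "disk_access": "vscsi", "network_bridged_by_sea": True}
print(migration_ready(mobile))  # [] -> no problems, ready to migrate
```

An empty result corresponds to a partition that would pass these checks; any
entry in the list names a prerequisite that must be resolved first, typically by
moving the resource onto the Virtual I/O Server.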

Live Partition Mobility requires a specific hardware and microcode configuration
that is available only on POWER6 and later technology-based systems.

The procedure that performs the migration identifies the resource configuration
of the mobile partition on the source system and then reconfigures both source
and destination systems accordingly. Because the focal point of hardware
configuration is the HMC, it has been enhanced to coordinate the process of
migrating partitions.

The mobile partition’s configuration is not changed during the migration. The
destination system must be able to host the mobile partition and must have
enough free processor and memory resources to satisfy the partition’s
requirements before migration is started. No limitation exists on the size of the
mobile partition; it can even use all resources of the source system offered by the
Virtual I/O Server.

The operating system and application data must reside on external disks of the
source system because the mobile partition’s disk data must be available after
the migration to the destination system is completed. An external, shared-access
storage subsystem is therefore required.

The mobile partition must not own any physical adapters and must use the
Virtual I/O Server for both network and external disk access. External disks may
be presented to the mobile partition as virtual SCSI resources, virtual Fibre
Channel resources, or both.

Because the mobile partition’s external disk space must be available to the
Virtual I/O Servers on the source and destination systems, you cannot use
storage pools. Each Virtual I/O Server must create virtual target devices using
physical disks and not logical volumes.
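
The restriction above can be expressed as a simple filter over the virtual target
devices of a Virtual I/O Server. The device names and dictionary layout are
invented for illustration; on a real Virtual I/O Server you would inspect the actual
mappings with the lsmap command.

```python
def movable_backing_devices(vtd_map):
    """Keep only virtual target devices backed by whole physical disks;
    logical-volume-backed devices block migration because the destination
    Virtual I/O Server cannot see the same data."""
    return [name for name, backing in vtd_map.items()
            if backing["type"] == "physical_disk"]

# Invented mappings: vtscsi0 is backed by a SAN LUN, while vtscsi1 is
# backed by a logical volume carved out of a local storage pool.
vtds = {"vtscsi0": {"type": "physical_disk", "device": "hdisk4"},
        "vtscsi1": {"type": "logical_volume", "device": "lv_client2"}}
print(movable_backing_devices(vtds))  # ['vtscsi0']
```

Only the devices that survive the filter could be re-created on the destination
Virtual I/O Server, because both servers see the same LUN on the SAN.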

Virtual network connectivity must be established before activating the partition
migration task, while virtual disk setup is performed by the migration process.

Both the source and the target system must have an appropriate shared Ethernet
adapter environment to host a moving partition. All virtual networks in use by the
mobile partition on the source system must be available as virtual networks on
the destination system.

VLANs defined by port VLAN IDs (PVIDs) on the Virtual I/O Server have no meaning outside
of an individual server as all packets are bridged untagged. It is possible for
VLAN 1 on CEC 1 to be part of the 192.168.1 network while VLAN 1 on CEC 2 is
part of the 10.1.1 network.



Because the two servers can bridge the same VLAN ID to two different networks,
verifying that VLAN 1 exists on both servers is not sufficient. You must check
whether VLAN 1 maps to the same network on both servers.
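
In other words, the comparison must be made on the networks that the VLANs
bridge to, not on the VLAN IDs themselves. A minimal sketch, with made-up
network assignments:

```python
def same_network(vlan_id, source_vlan_map, dest_vlan_map):
    """True only if the given VLAN ID bridges to the same external
    network on both servers; matching VLAN IDs alone prove nothing."""
    source_net = source_vlan_map.get(vlan_id)
    dest_net = dest_vlan_map.get(vlan_id)
    return source_net is not None and source_net == dest_net

# VLAN 1 exists on both servers but bridges to different networks:
cec1 = {1: "192.168.1.0/24"}
cec2 = {1: "10.1.1.0/24"}
print(same_network(1, cec1, cec2))  # False
```

The check passes only when the VLAN is present on both servers and mapped
to the same external network, which is exactly the condition the migrated
partition needs in order to keep its network connections.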

Figure 1-1 shows a basic hardware infrastructure enabled for Live Partition
Mobility and that is using a single HMC. Each system is configured with a single
Virtual I/O Server partition. The mobile partition has only virtual access to
network and disk resources. The Virtual I/O Server on the destination system is
connected to the same network and is configured to access the same disk space
used by the mobile partition. For illustration purposes, the device numbers are all
shown as zero, but in practice, they can vary considerably.

Figure 1-1 Hardware infrastructure enabled for Live Partition Mobility

The migration process creates a new logical partition on the destination system.
This new partition uses the destination system’s Virtual I/O Server to access the
mobile partition’s network and disks. During an active migration, the state of the
mobile partition is copied, as shown in Figure 1-2.

Figure 1-2 A mobile partition during migration



When the migration is complete, the source Virtual I/O Server is no longer
configured to provide access to the external disk data. The destination Virtual I/O
Server is set up to allow the mobile partition to use the storage. The final
configuration is shown in Figure 1-3.

Figure 1-3 The final configuration after a migration is complete

1.5.2 Components involved


The Live Partition Mobility function changes the configuration of the two involved
systems and, for active migration, manages the migration without interrupting the
service provided by the applications running on the mobile partition.

The migration manager function resides on the HMC and is in charge of
configuring both systems. It has the responsibility of checking that all hardware
and software prerequisites are met. It executes the required commands on the
two systems to complete migration while providing migration status to the user.

Note: HMC Version 7 Release 3.4 introduces remote migration, the option of
migrating partitions between systems managed by different HMCs. See 5.4,
“Remote Live Partition Mobility” on page 130 for details on remote migration.

When an inactive migration is performed, the HMC invokes the configuration
changes on the two systems. During an active migration, the running state
(memory, registers, and so on) of the mobile partition is transferred as well.

Memory management of an active migration is assigned to a mover service
partition on each system. During an active partition migration, the source mover
service partition extracts the mobile partition’s state from the source system and
sends it over the network to the destination mover service partition, which in turn
updates the memory state on the destination system.

Any Virtual I/O Server partition can be configured as a mover service partition.

Live Partition Mobility has no specific requirements on the mobile partition’s
memory size or the type of network connecting the mover service partitions. The
memory transfer is a process that does not interrupt a mobile partition’s activity,
but it might take time when a large memory configuration is involved on a slow
network. Use a high-bandwidth connection, such as 1 Gbps Ethernet or faster.
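As a rough planning illustration (an assumption, not a documented formula), the lower bound on transfer time can be estimated from the partition’s memory size and the usable link bandwidth; real migrations re-send changed pages, so actual times are longer:

```shell
# Rough planning estimate: seconds to copy a partition's memory once
# over a given link. Active migrations re-send pages that change, so
# this is a lower bound only, not a documented formula.
estimate_seconds() {
  local mem_gb=$1 link_gbps=$2 efficiency_pct=$3
  # bits to move = mem_gb * 8; usable rate = link_gbps * efficiency
  echo $(( mem_gb * 8 * 100 / (link_gbps * efficiency_pct) ))
}

# 32 GB partition over 1 Gbps Ethernet at an assumed ~70% efficiency:
estimate_seconds 32 1 70
```

A 32 GB partition on 1 Gbps Ethernet therefore needs at least several minutes of copy time, which is why a fast link between mover service partitions matters.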

1.6 Operation
Partition migration can be performed either as an inactive or an active operation.

This section describes HMC-based partition migration. See Chapter 7,
“Integrated Virtualization Manager for Live Partition Mobility” on page 221 for
details regarding IVM-based migration.

1.6.1 Inactive migration


The basic steps required for inactive migration are:
1. Prepare the mobile partition for migration, if required, such as removing
adapters that are not supported and ensuring that applications support
mobility. See 3.5, “Preparing the systems for Live Partition Mobility” on
page 54 for additional information.
2. Shut down the mobile partition if it is active.
3. Perform the migration validation procedure provided by the mobile partition’s
HMC to verify that the migration can be performed successfully.
4. Start the inactive partition migration using the HMC.
The HMC connects to both source and destination systems and performs the
migration steps, as follows:
a. It transfers the mobile partition’s configuration from source system to
destination system, including all partition profiles.
b. It updates the destination Virtual I/O Server to provide virtual SCSI
access, virtual Fibre Channel access, or both, to the mobile partition’s
disk resources.

c. It updates the source Virtual I/O Server to remove the resources that
were used to provide virtual SCSI access, virtual Fibre Channel access,
or both, to the mobile partition’s disk resources.
d. It removes the mobile partition configuration on the source system.
5. When migration is complete, the mobile partition can be activated on the
destination system.

The steps executed are similar to those an administrator would follow when
performing a manual migration. These actions normally require accurate
planning and a system-wide knowledge of the configuration of the two systems
because virtual adapters and virtual target devices have to be created on the
destination system, following virtualization configuration rules.

The inactive migration task takes care of all planning and validation and performs
the required activities without user action. This mitigates the risk of human error
and executes the movement in a timely manner.
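On the HMC command line, the validate-then-migrate sequence above maps onto the migrlpar command. The sketch below only prints the commands to run on the HMC; the system and partition names are placeholder assumptions:

```shell
# Sketch of the HMC CLI sequence for an inactive migration.
# System and partition names are placeholders (assumptions);
# substitute your own and run the printed commands on the HMC.
migr_cmd() { printf 'migrlpar -o %s -m %s -t %s -p %s\n' "$1" "$2" "$3" "$4"; }

migr_cmd v sys-source sys-dest mobile1   # step 3: validation only; no changes made
migr_cmd m sys-source sys-dest mobile1   # step 4: start the migration (inactive
                                         # because the partition is powered off)
```

The same -o m operation is used for both migration types; the HMC decides which applies from the partition state.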

1.6.2 Active migration


The basic steps required for an active migration are:
1. Prepare the mobile partition for migration, keeping it active, such as removing
adapters that are not supported and ensuring that all running applications
support mobility. See 3.4, “Live Partition Mobility preparation checks” on
page 53 for additional information.
2. Perform the migration validation procedure provided by the mobile partition’s
HMC to verify that the migration can be performed successfully.
3. Initiate the active partition migration using the HMC.
The HMC connects to both source and destination systems and performs the
migration steps, as follows:
a. It transfers the mobile partition’s configuration from source system to
destination system, including all the partition profiles.
b. It updates the destination Virtual I/O Server to provide virtual SCSI
access, virtual Fibre Channel access, or both, to the mobile partition’s
disk resources.
c. It activates the mover service partition function on the source and
destination Virtual I/O Servers. The mover service partitions copy the
mobile partition’s state from the source to the destination system.
d. It updates the source Virtual I/O Server to remove the resources that
were used to provide virtual SCSI access, virtual Fibre Channel access,
or both, to the mobile partition’s disk resources.
e. It removes the mobile partition’s configuration on the source system.

Active migration performs steps similar to those of inactive migration, but it also
copies physical memory to the destination system. It keeps applications running,
regardless of the size of the memory used by the partition; the service is not
interrupted, I/O continues to access the disks, and network connections keep
transferring data.
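An active migration is started with the same migrlpar syntax while the partition runs, and its progress can be watched with lslparmigr. As before, the names below are placeholder assumptions and the printed commands are meant to be run on the HMC:

```shell
# Sketch (placeholder names): start an active migration and monitor it.
# The HMC selects active migration because the partition is Running.
active_migr() { printf 'migrlpar -o m -m %s -t %s -p %s\n' "$1" "$2" "$3"; }
watch_migr()  { printf 'lslparmigr -r lpar -m %s\n' "$1"; }

active_migr sys-source sys-dest mobile1
watch_migr sys-source    # shows migration state while the copy is underway
```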

1.7 Combining mobility with other features


Many ways exist to take advantage of Live Partition Mobility. It can be exploited to
perform actions that were not previously possible because of complexity or time
constraints. Migration is a function that can be combined with existing IBM Power
Systems features and software to provide better and more flexible service.

1.7.1 High availability clusters


An environment that has only small windows for scheduled downtime may use
Live Partition Mobility to manage many scheduled activities either to reduce
downtime through inactive migration or to avoid service interruption through
active migration.

For example, if a system has to be shut down because of a scheduled power
outage, its hosted partitions may be migrated to systems that remain powered
before power is cut.

This case is shown in Figure 1-4, where system A has to be shut down. The
production database partition is actively migrated to system B, while the
production Web application partition is actively migrated to system C. The test
environment is not considered vital and is shut down during the outage.

Figure 1-4 Migrating all partitions of a system (diagram: on POWER6 System A, the
Database partition is migrated to POWER6 System B and the Web Application
partition to POWER6 System C; the Test Environment partition is shut down; each
system runs its own Virtual I/O Server)

Live Partition Mobility is a reliable procedure for system reconfiguration, and it
may be used to improve overall system availability.

High availability environments also require the definition of automated
procedures that detect software and hardware events and activate recovery
plans to restart a failed service as soon as possible.

Live Partition Mobility increases global availability, but it is not a high availability
solution. It requires that both the source and destination systems be operational
and that the partition is not in a failed state. In addition, it does not monitor
operating system and application state, and it is, by default, a user-initiated action.

Unplanned outages still require specific actions that are normally executed by
cluster solutions such as IBM PowerHA.

IBM PowerHA for AIX, also known as High Availability Cluster Multiprocessing
(HACMP™) for AIX, supports Live Partition Mobility for all IBM POWER6
technology-based servers. Table 1-1 provides support details.

Table 1-1 PowerVM Live Partition Mobility support

  PowerVM Live Partition     AIX 5.3              AIX 6.1
  Mobility support
  -------------------------  -------------------  -------------------
  PowerHA v5.3               HACMP # IZ07791      HACMP # IZ07791
                             AIX 5.3.7.1          AIX 6.1.0.1
                             RSCT 2.4.5.4         RSCT 2.5.0.0
  PowerHA v5.4.1             HACMP # IZ02620      HACMP # IZ02620
                             AIX 5.3.7.1          AIX 6.1.0.1
                             RSCT 2.4.5.4         RSCT 2.5.0.0
  PowerHA v5.5               AIX 5.3 TL9          AIX 6.1 TL2 SP1
                             RSCT 2.4.10.0        RSCT 2.5.2.0

Cluster software and Live Partition Mobility provide different functions that can be
used together to improve the availability and uptime of applications. They can
simplify administration, reducing the related cost.

1.7.2 AIX Live Application Mobility


AIX Version 6.1 allows you to group applications running on the same AIX 6
image, together with their disk data and network configuration. Each group is
called a workload partition.

Live Workload Partitions are migration-capable. Given two running AIX
Version 6.1 images that share a common file system, the administrator can
decide to actively migrate a workload partition between operating systems,
keeping the applications running.

Although Live Application Mobility is very similar to active Live Partition Mobility, it
is a pure AIX 6 function. It does not require any partition configuration change,
and it can be executed on any server running AIX Version 6.1, including
POWER7, POWER6, POWER5, and POWER4™ technology-based servers.
Live Application Mobility is a capability provided by PowerVM Workload Partitions
Manager™ and can function on all systems that support AIX Version 6.1,
whereas Live Partition Mobility is a PowerVM feature that works for AIX 5.3,
AIX 6.1, and Linux operating systems running on POWER6 or POWER7
technology-based servers.

Think of Live Application Mobility as a relocation, and Live Partition Mobility as a
migration.

The Workload Partition migration function does not require a configuration of
virtual devices in the source and destination systems. AIX keeps running on both
systems and continues to use its allocated resources. It is the system
administrator’s task to perform a dynamic partition reconfiguration operation to
reduce the footprint of the source partition and enlarge the destination partition.
Workload Partition migration also requires the destination partition to exist and
be running before it is started.

Figure 1-5 represents an example of Live Workload Partitions usage. System B
is a system with three different workloads. Each of them can be migrated to
another AIX Version 6.1 image even if they run on different hardware platforms.

Figure 1-5 AIX Workload Partition example (diagram: an AIX 6 image on POWER6
System B hosts Test, Web, and DB workload partitions; over a common file system,
each can be relocated to the AIX 6 images on POWER5 System A or POWER6
System C)

Live Partition Mobility and AIX Live Application Mobility have different scopes but
have similar characteristics. They can be used in conjunction to provide even
higher flexibility in a POWER6 or POWER7 environment.

Chapter 2. Live Partition Mobility mechanisms
This chapter presents the components involved in Live Partition Mobility
managed by the Hardware Management Console (HMC) and their respective
roles. It discusses the compatibility, capability, and readiness of partitions and
systems to participate in inactive and active migrations. It also describes the
mechanisms of Live Partition Mobility in detail. The chapter concludes with
observations on the influence of the infrastructure on partition migration.

This chapter contains the following topics:


• 2.1, “Live Partition Mobility components” on page 20
• 2.2, “Live Partition Mobility prerequisites” on page 23
• 2.3, “Partition migration high-level workflow” on page 26
• 2.4, “Inactive partition migration” on page 27
• 2.5, “Active partition migration” on page 31
• 2.6, “Performance considerations” on page 42
• 2.7, “AIX and active migration” on page 43
• 2.8, “Linux and active migration” on page 44

Information regarding Live Partition Mobility using components managed by the
Integrated Virtualization Manager (IVM) is detailed in Chapter 7, “Integrated
Virtualization Manager for Live Partition Mobility” on page 221.

© Copyright IBM Corp. 2007, 2009. All rights reserved. 19


2.1 Live Partition Mobility components
Inactive and active partition migration from one physical system to another is
achieved through interaction between several components, as Figure 2-1 shows.

Figure 2-1 Live Partition Mobility components (diagram: on an IBM Power System,
the HMC communicates through RMC with the DLPAR resource manager in the
mobile AIX/Linux partition and in the Virtual I/O Server, which provides the mover
service function and the VASI device; the POWER Hypervisor, partition profiles, and
service processor complete the picture)

These components and their roles are described in the following list.

Hardware Management Console (HMC)
The HMC is the central point of control. It coordinates administrator initiation and
setup of the subsequent migration command sequences that flow between the
various partition migration components.

The HMC provides both a graphical user interface (GUI) wizard and a
command-line interface to control migration. The HMC interacts with the service
processors and POWER Hypervisor™ on the source and destination servers,
the mover service partitions, the Virtual I/O Server partitions, and the mobile
partition itself.

Resource Monitoring and Control (RMC)
RMC is a distributed framework and architecture that allows the HMC to
communicate with a managed logical partition.

Dynamic LPAR Resource Manager
This component is an RMC daemon that runs inside the AIX, Linux, and Virtual
I/O Server partitions. The HMC uses this capability to remotely execute
partition-specific commands.

Mover service partition (MSP)
MSP is an attribute of the Virtual I/O Server partition. It enables the specified
Virtual I/O Server partition to provide the function that asynchronously extracts,
transports, and installs partition state. Two mover service partitions are involved
in an active partition migration: one on the source system, the other on the
destination system. Mover service partitions are not used for inactive migrations.

Virtual asynchronous services interface (VASI)
The source and destination mover service partitions use this virtual device to
communicate with the POWER Hypervisor to gain access to partition state. The
VASI device is included on the Virtual I/O Server, but is only used when the
server is declared as a mover service partition.

POWER Hypervisor
Active partition migration requires server hypervisor support to process both
informational and action requests from the HMC and to transfer partition state
through the VASI device in the mover service partitions.

Virtual I/O Server
Only virtual adapters can be migrated with a partition. The physical resources
that back the mobile partition’s virtual adapters must be accessible by the Virtual
I/O Servers on both the source and destination systems.

Partition profiles
The HMC copies all of the mobile partition’s profiles without modification to the
target system as part of the migration process.

The HMC creates a new migration profile containing the partition’s current state.
Unless you specify a profile name when the migration is started, this profile
replaces the existing profile that was last used to activate the partition. If you
specify an existing profile name, the HMC replaces that profile with the new
migration profile. Therefore, if you do not want the migration profile to replace
any of the partition’s existing profiles, you must specify a new, unique profile
name when starting the migration. All profiles belonging to the mobile partition
are deleted from the source server after the migration has completed.

If the mobile partition’s profile is part of a system profile on the source server,
then it is automatically removed after the source partition is deleted. It is not
automatically added to a system profile on the target server.

Time reference
Time reference is an attribute of partitions, including Virtual I/O Server partitions.
This partition attribute is only supported on managed systems that are capable
of active partition migration.

Synchronizing the time-of-day clocks for the source and destination Virtual I/O
Server partitions is optional for both active and inactive partition migration.
However, it is a recommended step for active partition migration. If you choose
not to complete this step, the source and destination systems synchronize the
clocks while the mobile partition is moving from the source system to the
destination system.

The time reference partition (TRP) setting has been introduced to enable the
POWER Hypervisor to synchronize the mobile partition’s time-of-day as it moves
from one system to another. It uses Coordinated Universal Time (UTC) derived
from a common Network Time Protocol (NTP) server with NTP clients on the
source and destination systems. More than one TRP can be specified per
system. The POWER Hypervisor uses the longest-running time reference
partition as the provider of authoritative system time. The setting can be set or
reset through the POWER Hypervisor while the partition is running.
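On the HMC, the TRP flag can be toggled with chsyscfg. The sketch below only prints the commands; the attribute name time_ref and the partition names are assumptions to verify against your HMC level:

```shell
# Sketch (assumptions: attribute name "time_ref", placeholder names).
# Prints the HMC commands that mark a Virtual I/O Server as a time
# reference partition and then check the current setting.
trp_cmds() {
  printf 'chsyscfg -r lpar -m %s -i "name=%s,time_ref=1"\n' "$1" "$2"
  printf 'lssyscfg -r lpar -m %s -F name,time_ref\n' "$1"
}

trp_cmds sys-source vios1
```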

2.1.1 Other components affecting Live Partition Mobility


Though not considered to be part of Live Partition Mobility, certain other IBM
Power Systems server components can influence, or can be influenced by, the
mobility of a partition.

Integrated Virtual Ethernet adapter (IVE)
An IVE adapter (also referred to as a Host Ethernet Adapter) uses a two- or
four-port integrated Ethernet adapter, directly attached to the POWER
Hypervisor. The hypervisor can create up to 32 logical Ethernet ports that can be
given to one or more logical partitions. This provides a partition with a virtual
Ethernet communications link, seen as a Logical Host Ethernet Adapter (LHEA),
without recourse to a shared Ethernet adapter in a Virtual I/O Server.

Performance monitor API (PMAPI)
PMAPI is an AIX subsystem comprising commands, libraries, and a kernel
extension that controls the use of the POWER performance registers.

Barrier Synchronization Registers (BSR)
Barrier synchronization registers provide a fast, lightweight barrier
synchronization between CPUs. This facility is intended for use by application
programs that are structured in a single instruction, multiple data (SIMD)
manner. Such programs often proceed in phases where all tasks synchronize
processing at the end of each phase. The BSR is designed to accomplish this
efficiently. Barrier synchronization registers cannot be migrated or reconfigured
dynamically.

2.2 Live Partition Mobility prerequisites


Live Partition Mobility requires coordinated movement of a partition’s state and
resources. Migratable partitions move between capable, compatible, and ready
systems.

A single HMC can control several concurrent migrations. There are no
architectural restrictions on the number of migrations that can be underway at
any one time. However, a single mover service partition can handle a maximum
of four simultaneous active migrations. It is possible to have several mover
service partitions on a system. In practice, the maximum number of concurrent
migrations is limited by the processing capacity of the HMC and contention for
HMC locks.

2.2.1 Capability and compatibility


The first step of any mobility operation is to validate the capability and
compatibility of the source and destination systems. The high-level prerequisites
for Live Partition Mobility are in the following list. If any of these elements are
missing, a migration cannot occur:
• A ready source system that is capable of migration
• A ready destination system that is capable of migration
• Compatibility between the source and destination systems
• The source and destination systems under the control of a single HMC
  (which may also include a redundant HMC)

Note: Beginning with HMC Version 7 Release 3.4, the destination system
may be managed by a remote HMC. Mobility operations to a remotely
managed destination system are discussed in 5.4, “Remote Live Partition
Mobility” on page 130.

• A migratable, ready partition to be moved from the source system to the
  destination system. For an inactive migration, the partition must be powered
  down, but must be capable of booting on the destination system.
• For active migrations, a mover service partition on the source and destination
  systems.
• One or more storage area networks (SANs) that provide connectivity to all of
  the mobile partition’s disks for the Virtual I/O Server partitions on both the
  source and destination servers. The mobile partition accesses all migratable
  disks through virtual Fibre Channel, virtual SCSI, or a combination of these
  devices. The LUNs used for virtual SCSI must be zoned and masked to the
  Virtual I/O Servers on both systems. Virtual Fibre Channel LUNs should be
  configured as described in Chapter 2 of PowerVM Virtualization on IBM
  System p: Managing and Monitoring, SG24-7590. Hardware-based iSCSI
  connectivity may be used in addition to SAN. SCSI reservation must be
  disabled.
• The mobile partition’s virtual disks, which must be mapped to LUNs and
  cannot be part of a storage pool or logical volume on the Virtual I/O Server.
• One or more physical IP networks (LANs) that provide the necessary network
  connectivity for the mobile partition through the Virtual I/O Server partitions
  on both the source and destination servers. The mobile partition accesses all
  migratable network interfaces through virtual Ethernet devices.
• An RMC connection to manage inter-system communication.
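The “SCSI reservation must be disabled” requirement is typically handled on each Virtual I/O Server with lsdev and chdev. The sketch below only prints the commands; the disk name is a placeholder:

```shell
# Sketch (placeholder disk name): print the VIOS commands that check
# and, if needed, disable SCSI reservation on a candidate LUN.
reserve_cmds() {
  printf 'lsdev -dev %s -attr reserve_policy\n' "$1"
  printf 'chdev -dev %s -attr reserve_policy=no_reserve\n' "$1"
}

reserve_cmds hdisk4
```

Repeat for every hdisk backing the mobile partition, on both the source and destination Virtual I/O Servers.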

Before initiating the migration of a partition, the HMC verifies the capability and
compatibility of the source and destination servers, and the characteristics of the
mobile partition to determine whether or not a migration is possible.

The hardware, firmware, Virtual I/O Servers, mover service partitions, operating
system, and HMC versions that are required for Live Partition Mobility along with
the system compatibility requirements are described in Chapter 3,
“Requirements and preparation” on page 45.
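Before these checks, the migration capability of each managed system can be queried from the HMC. The attribute names below match common HMC levels but should be treated as assumptions to verify:

```shell
# Sketch (assumed attribute names, placeholder system names): print the
# HMC query that reports whether each managed system supports active
# and inactive partition migration.
cap_cmd() {
  printf 'lssyscfg -r sys -m %s -F name,active_lpar_mobility_capable,inactive_lpar_mobility_capable\n' "$1"
}

cap_cmd sys-source
cap_cmd sys-dest
```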

2.2.2 Readiness
Migration readiness is a dynamic partition property that changes over time.

Server readiness
A server that is running on battery power is not ready to receive a mobile
partition; it cannot be selected as a destination for partition migration. A server
that is running on battery power may be the source of a mobile partition; indeed,
the fact that it is running on battery power may be the impetus for starting the
migration.

Infrastructure readiness
A migration operation requires a SAN and a LAN to be configured with their
corresponding virtual SCSI, virtual Fibre Channel, VLAN, and virtual Ethernet
devices. At least one Virtual I/O Server on both the source and destination
systems must be configured as a mover service partition for active migrations.
The HMC must have RMC connections to the Virtual I/O Servers and a
connection to the service processors on the source and destination servers. For
an active migration, the HMC also needs RMC connections to the mobile
partition and the mover service partitions.
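These readiness conditions can be spot-checked from the HMC. The sketch below prints the commands; lspartition -dlpar lists partitions with active RMC connections, and the lssyscfg attributes shown (lpar_env, msp) are assumptions to verify:

```shell
# Sketch (placeholder names; verify attribute names on your HMC level):
# print the HMC commands that confirm RMC connectivity and show which
# Virtual I/O Servers are configured as mover service partitions.
rmc_check() { printf 'lspartition -dlpar\n'; }
msp_check() { printf 'lssyscfg -r lpar -m %s -F name,lpar_env,msp\n' "$1"; }

rmc_check
msp_check sys-source
```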

2.2.3 Migratability
The term migratability refers to a partition’s ability to be migrated and is distinct
from partition readiness. A partition may be migratable but not ready. A partition
that is not migratable may be made migratable with a configuration change. For
active migration, consider whether a shutdown and reboot is required. When
considering a migration, also consider the following additional prerequisites:
• General prerequisites:
  – The memory and processor resources required to meet the mobile
    partition’s current entitlements must be available on the destination server.
  – The partition must not have any required dedicated physical adapters.
  – The partition must not have any logical host Ethernet adapters.
  – The partition is not a Virtual I/O Server.
  – The partition is not designated as a redundant error path reporting
    partition.
  – The partition does not have any of its virtual SCSI disks defined as logical
    volumes in any Virtual I/O Server. All virtual SCSI disks must be mapped
    to LUNs visible on a SAN or iSCSI.
  – The partition has virtual Fibre Channel disks configured as described in
    Section 5.11, “Virtual Fibre Channel” on page 187.
  – The partition is not part of an LPAR workload group. A partition can be
    dynamically removed from a group.
  – The partition has a unique name. A partition cannot be migrated if any
    partition exists with the same name on the destination server.
• In an inactive migration only, the partition:
  – Is in the Not Activated state
  – May use huge pages
  – May use the barrier synchronization registers

• In an active migration only, the two default server serial adapters that are
  automatically created and assigned to a partition when a partition is created
  are automatically recreated on the destination system by the migration
  process.
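One of the most common migratability blockers is a virtual SCSI disk backed by a logical volume rather than a whole LUN. On the Virtual I/O Server, the backing devices can be inspected with lsmap; the sketch below only prints the commands to run there:

```shell
# Sketch: print the VIOS commands that list virtual SCSI and NPIV
# mappings, so you can confirm every backing device is a whole LUN
# (hdisk), not a logical volume or storage-pool backed device.
vscsi_check_cmds() {
  printf 'lsmap -all\n'        # virtual SCSI mappings and backing devices
  printf 'lsmap -all -npiv\n'  # virtual Fibre Channel (NPIV) mappings
}

vscsi_check_cmds
```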

2.3 Partition migration high-level workflow


Inactive and active partition migration each have the same four-step sequence:
1. Preparation
Ready the infrastructure to support Live Partition Mobility.
2. Validation
Check the configuration and readiness of the source and destination systems.
3. Migration
Transfer of partition state from the source to destination takes place. One
command is used to launch both inactive and active migrations. The HMC
determines the appropriate type of migration to use based on the state of the
mobile partition.
– If the partition is in the Not Activated state, the migration is inactive.
– If the partition is in the Running state, the migration is active.
4. Completion
Free unused resources on the source system and the HMC.

The remainder of this chapter describes the inactive and active migration
processes.
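The state-based choice in step 3 can be observed directly from the HMC. The sketch below prints the queries (placeholder names); the partition state decides which migration type migrlpar performs:

```shell
# Sketch (placeholder names): show partition states (Not Activated ->
# inactive migration, Running -> active migration) and any migration
# currently in progress.
state_cmd()      { printf 'lssyscfg -r lpar -m %s -F name,state\n' "$1"; }
migr_state_cmd() { printf 'lslparmigr -r lpar -m %s\n' "$1"; }

state_cmd sys-source
migr_state_cmd sys-source
```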

Note: As part of a migration process, the HMC copies all of the mobile
partition’s profiles as-is to the destination system. The HMC also creates a
new migration profile containing the partition’s current state and, unless you
specify a profile name, this profile replaces the existing profile that was last
used to activate the partition. If you specify an existing profile name, the HMC
replaces that profile with the new migration profile. Therefore, if you want to
keep the partition’s existing profiles, you should specify a new and unique
profile name when initiating the migration. If you add an adapter (physical or
virtual) to a partition using dynamic reconfiguration, it is added to the profile
as a desired resource.

2.4 Inactive partition migration
Inactive partition migration allows you to move a powered-off partition, and its
profiles and virtualized resources, from one server to another. The mobile
partition retains its name, its inactive state, and its NVRAM. Its virtual I/O
resources are assigned and remapped to the appropriate Virtual I/O Server
partitions on the destination system. Its processor and memory resources remain
unassigned until it is activated.

2.4.1 Introduction
The HMC is the central point of control, coordinating administrator actions and
migration command sequences. Because the mobile partition is powered off,
only the static partition state (definitions and configurations) is transferred from
source to destination. The transfer is performed by the controlling HMC, the
service processors, and the POWER Hypervisor on the two systems; there is no
dynamic state, so mover service partitions are not required.

The HMC creates a migration profile for the mobile partition on the destination
server corresponding to its current configuration. All profiles associated with the
mobile partition are moved to the destination server after the partition definition
has been created on the destination server.

Note: Because the HMC always migrates the latest activated profile, an
inactive partition that has never been activated is not migratable. To meet this
requirement, booting to an operating system is unnecessary; booting to the
SMS menu is sufficient. Any changes to the latest activated profile after
power-off are not preserved. To save the changes, the mobile partition must
be reactivated and shut down.

2.4.2 Validation phase
The HMC performs a pre-check to ensure that you are performing a valid
migration, that no high-level blocking problems exist, and that the migration has a
good chance of being successful. The validation workflow is schematically shown
in Figure 2-2.

Figure 2-2 Inactive migration validation workflow (diagram: over time, the HMC
checks the inactive migration capability and compatibility of the source and
destination systems, verifies the RMC connections to the source and destination
Virtual I/O Servers, builds the virtual adapter mapping, and checks the readiness of
the mobile partition)

The inactive migration validation process performs the following operations:
• Checks the Virtual I/O Server and hypervisor migration capability and
  compatibility on the source and destination
• Checks that resources (processors, memory, and virtual slots) are available to
  create a shell partition on the destination system with the exact configuration
  of the mobile partition
• Verifies the RMC connections to the source and destination Virtual I/O
  Servers
• Ensures that the partition name is not already in use at the destination
• Checks for virtual MAC address uniqueness
• Checks that the partition is in the Not Activated state
• Ensures that the mobile partition is an AIX or Linux partition, is not an
  alternate path error logging partition, is not a service partition, and is not a
  member of a workload group
• Ensures that the mobile partition has an active profile
• Checks the number of current inactive migrations against the number of
  supported inactive migrations
• Checks that all required I/O devices are connected to the mobile partition
  through a Virtual I/O Server, that is, there are no required physical adapters
• Verifies that the virtual SCSI disks assigned to the partition are accessible by
  the Virtual I/O Servers on the destination system
• Creates the virtual adapter migration map that associates adapters on the
  source Virtual I/O Servers with adapters on the destination Virtual I/O Servers
• Ensures that no virtual SCSI disks are backed by logical volumes and that no
  virtual SCSI disks are attached to internal disks (not on the SAN)
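These pre-checks correspond to the migrlpar validate operation on the HMC command line; lslparmigr can also list candidate mover service partition pairs. The sketch prints the commands with placeholder names:

```shell
# Sketch (placeholder names): print the HMC commands that run the
# validation pre-checks and list possible mover service partition pairs.
validate_cmds() {
  printf 'migrlpar -o v -m %s -t %s -p %s\n' "$1" "$2" "$3"
  printf 'lslparmigr -r msp -m %s -t %s\n' "$1" "$2"
}

validate_cmds sys-source sys-dest mobile1
```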

2.4.3 Migration phase


If all the pre-migration checks pass, the migration phase can start. For inactive
partition migration, the transfer of state follows a path:
1. From the source hypervisor to the HMC
2. From the HMC to the destination hypervisor

This path is shown in Figure 2-3.

Figure 2-3 Inactive migration state flow (diagram: state flows (1) from the source
system’s POWER Hypervisor to the HMC and (2) from the HMC to the destination
system’s POWER Hypervisor)

The inactive migration workflow is shown in Figure 2-4.

Figure 2-4 Inactive migration workflow (diagram: after validation, the new LPAR is
created on the destination system, virtual storage adapters are set up on the
destination Virtual I/O Server, virtual storage is removed from the source Virtual I/O
Server, the LPAR is removed from the source system, and completion is notified)

The HMC performs the following workflow steps:


1. Inhibits any changes to the source system and the mobile partition that might
invalidate the migration.
2. Extracts the virtual device mappings from the source Virtual I/O Servers and
uses this to generate a source-to-destination virtual adapter migration map.
This map ensures no loss of multipath I/O capability for virtual SCSI, virtual
Fibre Channel, and virtual Ethernet. The HMC fails the migration request if
the device migration map is incomplete.
3. Creates a compatible partition shell on the destination system.
4. Creates a migration profile for the mobile partition’s current (last-activated)
profile. If the mobile partition was last activated with profile my_profile and
resources were moved into or out of the partition before the partition was
shut down, the migration profile will differ from that of my_profile.
5. Copies over the partition profiles. Copying includes all existing profiles
associated with the mobile partition on the source system and the migration
profile. The existing partition profiles are not modified at all during the
migration; the virtual devices are not re-mapped to the new system.

30 IBM PowerVM Live Partition Mobility


6. Creates the required adapters (virtual SCSI, virtual Fibre Channel, or both) in
the Virtual I/O Servers on the destination system and completes the logical
unit number (LUN) to virtual SCSI adapter mapping as well as the
NPIV-enabled adapter to virtual Fibre Channel adapter mapping.
7. On completion of the transfer of state, the HMC sets the migration state to
completed and informs the POWER Hypervisor on both the source and
destination systems.

2.4.4 Migration completion phase


When the migration is complete, unused resources are deleted, as follows:
1. The source Virtual I/O Servers remove the virtual adapter slots and virtual
target devices used by the mobile partition. The HMC removes the virtual
slots from the source Virtual I/O Server’s profile.
2. The HMC deletes the partition on the source server.

Note: Virtual slot numbers can change during migration. A partition that is
moved to another server and then back to the original server might not have
the same slot numbers. If this information is required, record the slot
numbers before migrating.
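A sketch of one way to record the mapping before a move: on the HMC, a command along the lines of lshwres can list client-to-server virtual SCSI slot pairs, and the captured output can be turned into a readable record with standard tools. The field list in the comment and the canned sample below are illustrative, not output from a real system.

```shell
# Sketch only: on the HMC, something like the following captures virtual
# SCSI slot assignments before a migration (field list is illustrative):
#   lshwres -r virtualio --rsubtype scsi -m <system> --level lpar \
#     -F lpar_name,slot_num,remote_lpar_name,remote_slot_num
# The canned output below stands in for a real capture.
canned='PROD,2,VIOS1_L9,30
PROD,3,VIOS2_L9,30'
slot_map=$(printf '%s\n' "$canned" |
  awk -F, '{ printf "client %s slot %s -> server %s slot %s\n", $1, $2, $3, $4 }')
echo "$slot_map"
```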

2.4.5 Stopping an inactive partition migration


You can stop an inactive partition migration from the controlling HMC while the
partition is in the Migration starting state. The HMC performs automatic rollback
of all reversible changes and identifies all non-reversible changes, if any.

2.5 Active partition migration


The active partition migration function provides the capability to move a running
operating system, hosted middleware, and applications between two systems
without disrupting the service provided. Databases, application servers, network
and SAN connections, and user applications are all transferred in a manner
transparent to users. The mobile partition retains its name, its active state, its
NVRAM, its profiles, and its current configuration. Its virtual I/O resources are
assigned and remapped to the appropriate Virtual I/O Server partitions on the
destination system.



2.5.1 Active partition state
In addition to the partition definition and resource configuration, active migration
involves the transfer of active run-time state. This state includes the:
򐂰 Partition’s memory
򐂰 Hardware page table (HPT)
򐂰 Processor state
򐂰 Virtual adapter state
򐂰 Non-volatile RAM (NVRAM)
򐂰 Time of day (ToD)
򐂰 Partition configuration
򐂰 State of each resource

The mover service partitions on the source and destination, under the control of
the HMC, move this state between the two systems.

2.5.2 Preparation
After you have created the Virtual I/O Servers and enabled the mover service
partitions, you must prepare the source and destination systems for migration:
1. Synchronize the time-of-day clocks on the source and destination mover
service partitions using an external time reference, such as the Network
Time Protocol (NTP). This step is
optional; it increases the accuracy of time measurement during migration.
The step is not required by the migration mechanisms. Even if this step is
omitted, the migration process correctly adjusts the partition time. Time never
goes backward on the mobile partition during a migration.
2. Prepare the partition for migration:
a. Use dynamic reconfiguration on the HMC to remove all dedicated I/O,
such as PCI slots, GX slots, virtual optical devices, and Integrated Virtual
Ethernet from the mobile partition.
b. Remove the partition from a partition workload group.
3. Prepare the destination Virtual I/O Server.
a. Configure the shared Ethernet adapter as necessary to bridge VLANs.
b. Configure the SAN such that requisite storage devices are available.



4. Initiate the partition migration by selecting the following items, with either the
graphical user interface (GUI) or command-line interface (CLI) on the HMC:
– The partition to migrate
– The destination system
– Optionally, the mover service partition on the source and destination
systems
If there is only one active mover service partition on the source or the
destination server, the mover service partition selection is automatic. If
there are multiple active mover service partitions on one or both, you can
either specify which ones to use, or let the HMC choose for you.
– Optionally, the virtual device mappings in the destination Virtual I/O
Server. See 5.7, “The command-line interface” on page 162 for details.
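As a sketch of the CLI route for step 4, the lines below compose (but do not run) a validation request and a migration request with the HMC migrlpar command; the system and partition names are the examples used elsewhere in this chapter, and any optional choices, such as explicit mover service partition selection, would be passed with additional options.

```shell
# Compose the HMC CLI calls for validation (-o v) and migration (-o m).
# These would be run on the HMC itself; here they are only built into
# strings so the syntax is visible.
src="9117-MMA-SN100F6A0-L9"
dest="9117-MMA-SN101F170-L10"
lpar="PROD"
validate_cmd="migrlpar -o v -m $src -t $dest -p $lpar"
migrate_cmd="migrlpar -o m -m $src -t $dest -p $lpar"
printf '%s\n%s\n' "$validate_cmd" "$migrate_cmd"
```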

2.5.3 Validation phase


After the source and destination mover service partitions have been identified,
the HMC performs a pre-check to ensure that the migration is valid, that no
high-level blocking problems exist, and that the environment satisfies the
prerequisites for a migration operation.

After the pre-check, the HMC prevents any configuration changes to the partition
that might invalidate the migration and then proceeds to perform a detailed
capability, compatibility, migratability, and readiness check on the source and
destination systems.

All configuration checks are performed during each validation to provide a
complete list of potential problems.



The workflow for the active migration validation is shown in Figure 2-5.

Figure 2-5 Active migration validation workflow

Configuration checks
The HMC performs the following configuration checks:
򐂰 Checks the source and destination systems, POWER Hypervisor, Virtual I/O
Servers, and mover service partitions for active partition migration capability
and compatibility
򐂰 Checks that the RMC connections to the mobile partition, the source and
destination Virtual I/O Servers, and the connection between the source and
destination mover service partitions are established
򐂰 Checks that there are no required physical adapters in the mobile partition
and that there are no required virtual serial slots higher than slot 2
򐂰 Checks that no client virtual SCSI disks on the mobile partition are backed by
logical volumes and that no disks map to internal disks
򐂰 Checks the mobile partition, its operating system, and its applications for
active migration capability. An application registers its capability with AIX
and can block migrations
򐂰 Checks that the logical memory block size is the same on the source and
destination systems
򐂰 Checks that the type of the mobile partition is AIX or Linux and that it is
neither an alternate error logging partition nor a mover service partition
򐂰 Checks that the mobile partition is not configured with barrier synchronization
registers
򐂰 Checks that the mobile partition is not configured with huge pages



򐂰 Checks that the partition state is active or running
򐂰 Checks that the mobile partition is not in a partition workload group
򐂰 Checks the uniqueness of the mobile partition’s virtual MAC addresses
򐂰 Checks that the mobile partition’s name is not already in use on the
destination server
򐂰 Checks the number of current active migrations against the number of
supported active migrations
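The last check in the list can also be reproduced manually. A hedged sketch: lslparmigr -r sys reports per-system migration limits; the attribute names below match those reported by recent HMC levels but should be treated as illustrative, and the output here is canned rather than captured live.

```shell
# Canned 'lslparmigr -r sys'-style output; compare the number of
# in-progress active migrations against the supported maximum.
sys_caps='num_active_migrations_supported=8,num_active_migrations_in_progress=1'
supported=$(printf '%s\n' "$sys_caps" | tr ',' '\n' |
  awk -F= '$1 == "num_active_migrations_supported" { print $2 }')
running=$(printf '%s\n' "$sys_caps" | tr ',' '\n' |
  awk -F= '$1 == "num_active_migrations_in_progress" { print $2 }')
if [ "$running" -lt "$supported" ]; then
  verdict="capacity available"
else
  verdict="at limit"
fi
echo "$verdict"
```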

Resource availability checks


After verifying system and partition configurations, the HMC determines whether
sufficient resources are available on the destination server to host the inbound
mobile partition. The HMC performs the following tasks:
1. Checks that the necessary resources (processors, memory, and virtual slots)
are available to create a shell partition on the destination system with the
exact configuration of the mobile partition.
2. Generates a source-to-destination hosting virtual adapter migration map,
ensuring no loss of multipath I/O capability for virtual SCSI, virtual Fibre
Channel, and virtual Ethernet. The HMC fails the migration request if the
device migration map is incomplete.
3. Instructs the operating system in the mobile partition to check its own capacity
and readiness for migration. AIX passes the check-migrate request to those
applications and kernel extensions that have registered to be notified of
dynamic reconfiguration events. The operating system either accepts or
rejects the migration. In the latter case, the HMC fails the migration request.

This is the end of the validation phase. At this point, there have been no state
changes to the source and destination systems or to the mobile partition. The
HMC inhibits all further dynamic reconfiguration of the mobile partition that might
invalidate the migration: CPU, memory, slot, variable capacity weight, processor
entitlement, and LPAR group.

The partition migration phase is ready to start.



2.5.4 Partition migration phase
If all the validation checks pass, then the HMC initiates the migration procedure.
From this point forward, all state changes are rolled back in the event of an error.

Figure 2-6 shows the activities and workflow of the migration phase of an active
migration.

Figure 2-6 Migration phase of an active migration

For active partition migration, the transfer of partition state follows a path:
1. From the mobile partition to the source system’s hypervisor.
2. From the source system’s hypervisor to the source mover service partition.
3. From the source mover service partition to the destination mover service
partition.
4. From the destination mover service partition to the destination system's
hypervisor.
5. From the destination system’s hypervisor to the partition shell on the
destination.



The path is shown in Figure 2-7.

Figure 2-7 Active migration partition state transfer path

The migration process consists of the following steps:


1. The HMC creates a compatible partition shell on the destination system. This
shell partition is used to reserve the resources required to receive the inbound
mobile partition.
The pending values of the mobile partition (changes made to the partition’s
profile since activation) are not preserved across the migration; the current
values of the partition on the source system become both the pending and the
current values of the partition on the destination system. The configuration of
the partition on the source system includes:
– Processor configuration, which is dedicated or shared processors,
processor counts, and entitlements (minimum, maximum, and desired)
– Memory configuration (minimum, maximum, and desired)
– Virtual adapter configuration
The creation of the partition shell on the destination system ensures that all
required resources are available for the mobile partition and cannot be stolen
during the migration. The current partition profile associated with the mobile
partition is created on the destination system.
2. The HMC configures the mover service partitions on the source and
destination systems. These two movers establish:
– A connection to their respective POWER Hypervisor through the VASI
adapter
– A private, full-duplex communications channel between themselves, over a
standard TCP/IP connection, for transporting the moving partition’s state
3. The HMC issues a prepare for migration event to the migrating operating
system (still on the source system), giving the mobile partition the opportunity



to get ready to be moved. The operating system passes this event to
registered kernel extensions and applications, so that they may take any
necessary actions, such as reducing memory footprint, throttling workloads,
adjusting heartbeats, and other timeout thresholds. The operating system
inhibits access to the PMAPI registers and zeroes internal counters upon
receipt of this event.
If the partition is not ready to perform a migration at this time, then it returns a
failure indicator to the HMC, which cancels the migration and rolls back all
changes.
4. The HMC creates the virtual target devices, virtual Fibre Channel, and virtual
SCSI server adapters in each of the Virtual I/O Servers on the destination
system that will host the virtual SCSI and virtual Fibre Channel client
adapters of the mobile partition. This step uses the virtual adapter migration
map created during the validation phase. Migration stops if an error occurs.
5. The mover on the source system starts sending the partition state to the
mover on the destination system, copying the mobile partition’s physical
pages to the physical memory reserved by the partition shell on the
destination.
6. Because the mobile partition is still active, with running applications, its state
continues to change while the memory is being moved from one system to the
other. Memory pages that are modified during the transfer of state are marked
modified, or dirty. After the first pass, the source mover re-sends all the dirty
pages. This process is repeated until the number of pages marked as dirty at
the end of each loop no longer decreases, or is considered sufficiently small,
or a timeout is reached.
Based on the total number of pages associated with partition state and the
number of pages left to transmit, the mover service partition instructs the
hypervisor on the source system to suspend the mobile partition.
7. The mobile partition confirms the suspension by quiescing all its running
threads.
The partition is now suspended. Start of suspend window period.
8. During the partition suspension, the source mover service partition continues
to send partition state to the destination server.
The partition is now resumed.
9. The mobile partition resumes execution on the destination server,
re-establishing its operating environment. This is the point of no return; the
migration can no longer be rolled back to the source. If the migration fails after
this, recovery will complete the migration on to the destination system.
The mobile partition might resume execution before all its memory pages
have been copied to the destination. If the mobile partition requires a page



that has not yet been migrated, the page is demand-paged from the source
system. This technique significantly reduces the length of the pause, during
which the partition is unavailable.
10. The mobile partition recovers I/O, retrying all pending I/O requests that
were not completed while on the source system. It also sends a gratuitous
ARP request on all VLAN virtual adapters to update the ARP caches in the
various switches and systems in the external network.
The partition is now active and visible again. End of suspend window period.
11. When the destination mover service partition receives the last dirty page
from the source system, the migration is complete.

The suspend window period (from end of step 7 through end of step 10) lasts
only a few seconds.
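The iterative pre-copy in steps 5 through 7 can be modeled with a short loop. The per-pass dirty-page counts below are invented for illustration; the stopping rule (count small enough, or no longer shrinking) follows the description above, with the timeout case omitted for brevity.

```shell
# Model of the pre-copy loop: each pass re-sends pages dirtied during the
# previous pass; suspend when the count is small or stops decreasing.
prev=999999; pass=0; result=""
for dirty in 4096 1024 300 90 80 78; do   # invented per-pass dirty counts
  pass=$((pass + 1))
  if [ "$dirty" -le 100 ] || [ "$dirty" -ge "$prev" ]; then
    result="suspend after pass $pass with $dirty dirty pages"
    break
  fi
  prev=$dirty
done
echo "$result"
```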

2.5.5 Migration completion phase


The final steps of the migration return all resources to the source and destination
systems and restore the partition to its fully functional state, as follows:
1. With the completion of the state transfer, the communications channel
between the two mover service partitions is closed along with their VASI
connections to their respective POWER Hypervisor.
2. The Virtual I/O Servers on the source system remove the adapters (virtual
Fibre Channel, SCSI server, or both) associated with the mobile partition by:
– Unlocking the virtual SCSI and virtual Fibre Channel server adapters
– Removing the device-to-LUN mappings for virtual SCSI
– Removing the virtual Fibre Channel-to-NPIV enabled physical adapter
mappings
– Closing the device drivers
– Deleting the virtual SCSI and virtual Fibre Channel server adapters
3. On the mobile partition, AIX notifies all registered kernel extensions and
applications that the migration is complete so that they may perform any
required recovery operations.
4. The HMC informs the source and destination mover service partitions that the
migration is complete and that they can delete the migration data from their
tables.
5. The HMC deletes the mobile partition and all its profiles on the source server.
6. You may now add dedicated I/O adapters, as required, by using dynamic
reconfiguration, and add the mobile partition to a custom system group.



2.5.6 Virtual I/O Server selection
If the HMC cannot find a virtual adapter mapping for a migration, the migration is
halted at the validation phase.

The HMC must identify at least one possible destination Virtual I/O Server for
each virtual SCSI and virtual Fibre Channel client adapter assigned to the mobile
partition, or the HMC fails the pre-check or migration. Destination Virtual I/O
Servers must have access to all LUNs used by the mobile partition. If multiple
source-to-destination Virtual I/O Server combinations are possible for virtual
adapter mappings, and you have not specified a mapping, the HMC selects one
of them.

Suggested mappings are given if the following criteria are met:


򐂰 The mobile partition's virtual client SCSI and virtual Fibre Channel adapters
that are assigned to a single Virtual I/O Server on the source server will be
assigned to a single Virtual I/O Server on the destination system.
򐂰 The mobile partition's virtual SCSI and virtual Fibre Channel client adapters
that are assigned to two or more different Virtual I/O Servers on the source
system will be assigned to the same number of Virtual I/O Servers on the
destination system.

Failure to find a suggested match during the separately run migration pre-check
happens if either of the following statements is true:
򐂰 The mobile partition's virtual SCSI and virtual Fibre Channel client adapters
that are currently assigned to a single Virtual I/O Server on the source server
have to be assigned to different Virtual I/O Servers on the destination system.
򐂰 The mobile partition's virtual SCSI and virtual Fibre Channel client adapters
that are currently assigned to different Virtual I/O Servers on the source
server will have to be assigned to a single Virtual I/O Server on the destination.

If the destination Virtual I/O Servers cannot access all the VLANs required by the
mobile partition, the HMC halts the migration.



Both possible and suggested HMC-selected Virtual I/O Servers, if any exist, can
be viewed on the HMC through the GUI or with the lslparmigr -r virtualio CLI
command, as displayed in Example 2-1.

Example 2-1 Sample output of the lslparmigr -r virtualio command


$ lslparmigr -r virtualio -m 9117-MMA-SN100F6A0-L9 \
-t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD

possible_virtual_scsi_mappings=30/VIOS1_L10/1,\
suggested_virtual_scsi_mappings=30/VIOS1_L10/1,\
possible_virtual_fc_mappings=none,\
suggested_virtual_fc_mappings=none
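Because the output of Example 2-1 is plain key=value text, a script can pull out the suggested mapping directly. The sketch below parses a canned copy of that output, joined onto one line.

```shell
# Canned copy of the Example 2-1 output (joined onto one line), parsed
# for the suggested virtual SCSI mapping.
out='possible_virtual_scsi_mappings=30/VIOS1_L10/1,suggested_virtual_scsi_mappings=30/VIOS1_L10/1,possible_virtual_fc_mappings=none,suggested_virtual_fc_mappings=none'
suggested=$(printf '%s\n' "$out" | tr ',' '\n' |
  awk -F= '$1 == "suggested_virtual_scsi_mappings" { print $2 }')
echo "suggested virtual SCSI mapping: $suggested"
```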

2.5.7 Source and destination mover service partitions selection


The HMC selects a source and destination mover service partition unless you
provide them explicitly. If no movers are available, the migration fails. Valid
source and destination mover service partition pairs that can be used for a
migration can be viewed with the HMC GUI or with the lslparmigr -r msp CLI
command, as displayed in Example 2-2.

Example 2-2 Sample output of the lslparmigr -r msp command


$ lslparmigr -r msp -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10\
--filter lpar_names=PROD

source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,\
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/

If either of the chosen mover service partitions determines that its VASI cannot
handle a migration or if the HMC receives a VASI device error from a mover
service partition, the HMC stops the migration with an error.

2.5.8 Stopping an active migration


You can stop an active partition migration through the controlling HMC while the
mobile partition is in the Migration starting state on the source system; this is
the allowable window. If stopped during this window, the partition remains on the
source system as though the migration had never started. If you try to stop a
migration after the Migration starting state, the HMC takes no action other than
displaying an error message.
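The migrlpar command also has stop and recover operations. As before, the commands are composed here rather than run, and the names are the examples used earlier; the recover form (-o r) is what you would reach for if a stopped or failed migration leaves the partition with an inconsistent migration state.

```shell
# Compose the stop (-o s) and recover (-o r) forms of migrlpar; run these
# on the controlling HMC.
src="9117-MMA-SN100F6A0-L9"
lpar="PROD"
stop_cmd="migrlpar -o s -m $src -p $lpar"
recover_cmd="migrlpar -o r -m $src -p $lpar"
printf '%s\n%s\n' "$stop_cmd" "$recover_cmd"
```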



If the source or destination server is powered down after the HMC has enabled
suspension on the mobile partitions, the HMC must stop the migration and
perform a rollback of all reversible changes. When the hypervisor resumes
operation, the partitions come back up in the powered off state, with a migration
state of invalid.

2.6 Performance considerations


Active partition migration involves moving the state of a partition from one system
to another while the partition is still running. The mover service partitions working
with the hypervisor use partition virtual memory functions to track changes to
partition memory state on the source system while it is transferring memory state
to the destination system.

During the migration phase, an initial transfer of the mobile partition’s physical
memory from the source to the destination occurs. Because the mobile partition
is still active, a portion of the partition’s resident memory will almost certainly
have changed during this pass. The hypervisor keeps track of these changed
pages for retransmission to the destination system in a dirty page list. It makes
additional passes through the changed pages until the mover service partition
detects that a sufficient number of pages are clean or the timeout is reached.

The speed and load of the network that is used to transfer state between the
source and destination systems influence the time required for both the transfer
of the partition state and the performance of any remote paging operations.

The amount of changed resident memory after the first pass is controlled more
by write activity of the hosted applications than by the total partition memory size.
Nevertheless, a reasonable assumption is that partitions with a large memory
requirement have higher numbers of changed resident pages than smaller ones.

To ensure that active partition migrations are truly nondisruptive, even for large
partitions, the POWER Hypervisor resumes the partition on the destination
system before all the dirty pages have been migrated over to the destination. If
the mobile partition tries to access a dirty page that has not yet been migrated
from the source system, the hypervisor on the destination sends a demand
paging request to the hypervisor on the source to fetch the required page.

Providing a high-performance network between the source and destination
mover partitions and reducing the partition’s memory update activity prior to
migration will improve the latency of the state transfer phase of migration. We
suggest using a dedicated network for state transfer, with a nominal bandwidth of
at least 1 Gbps.
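A rough first-pass transfer estimate follows from memory size and link speed alone. The numbers below are illustrative; real throughput also depends on mover service partition load and on how quickly the workload dirties pages.

```shell
# First-pass transfer time ~ memory size / bandwidth (ignoring overheads
# and retransmission of dirty pages).
mem_gb=32   # mobile partition memory, illustrative
gbps=1      # nominal dedicated-network bandwidth
secs=$(( mem_gb * 8 / gbps ))   # 8 gigabits per gigabyte
echo "first pass of ${mem_gb} GB at ${gbps} Gbps: about ${secs} seconds"
```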



2.7 AIX and active migration
An AIX partition continues running during an active migration. Most AIX features
work seamlessly before, during, and after the migration. These include, but are
not limited to, the following features:
򐂰 System and advanced accounting
򐂰 Workload manager
򐂰 System trace
򐂰 Resource sets
Including exclusive-use processor resource sets
򐂰 Pinned memory
򐂰 Memory affinity
See 5.8, “Migration awareness” on page 177 for details about memory affinity
considerations.
򐂰 Kernel and kernel extensions
See 5.10, “Making kernel extension migration aware” on page 185 for details
about how to make a kernel extension migration aware.
򐂰 Large memory pages
Huge memory pages cannot be used
򐂰 Processor binding
Processes remain bound to the same logical processor throughout the
migration. See 5.8, “Migration awareness” on page 177 for details on
processor binding considerations.

Performance monitoring tools (such as the topas, tprof, and filemon commands)
can run on a mobile partition during an active migration. However, the data that
these tools report during the migration process might not be meaningful,
because of underlying hardware changes, performance monitor counters that
might be reset, and so on.

Although AIX is migration safe, verify that any applications you are running are
migration safe or aware. See 5.8, “Migration awareness” on page 177 for more
information.



2.8 Linux and active migration
A Linux partition continues running during an active migration. Many features on
supported Linux operating systems work seamlessly before, during, and after
migration, such as IBM RAS tools and dynamic reconfiguration.

Similar to AIX, Linux is migration safe. It is a good idea to verify that any
applications not included in the full distributions of the supported Linux
operating systems are migration safe or migration aware.



Chapter 3. Requirements and preparation
In this chapter, we discuss the preparatory steps required for a logical partition to
be migrated from one system to another system.

This chapter contains the following topics:


򐂰 3.1, “Introduction” on page 46
򐂰 3.2, “Skill considerations” on page 46
򐂰 3.3, “Requirements for Live Partition Mobility” on page 47
򐂰 3.4, “Live Partition Mobility preparation checks” on page 53
򐂰 3.5, “Preparing the systems for Live Partition Mobility” on page 54
򐂰 3.6, “Preparing the HMC for Live Partition Mobility” on page 61
򐂰 3.7, “Preparing the Virtual I/O Servers” on page 63
򐂰 3.8, “Preparing the mobile partition for mobility” on page 66
򐂰 3.9, “Configuring the external storage” on page 79
򐂰 3.10, “Network considerations” on page 87
򐂰 3.11, “Distance considerations” on page 88

© Copyright IBM Corp. 2007, 2009. All rights reserved. 45


3.1 Introduction
Requirements and preparation must be fulfilled whether you perform an inactive
or an active partition migration. As previously described:
򐂰 Inactive partition migration allows you to move a powered-off logical partition,
including its operating system and applications, from one system to another.
򐂰 Active partition migration is the ability to move a running logical partition,
including its operating system and applications, from one system to another
without disrupting the operation of that logical partition.

When you have ensured that all these requirements are satisfied and all
preparation tasks are completed, the HMC verifies and validates the Live
Partition Mobility environment. If this validation turns out to be successful, then
you can initiate the partition migration by using the wizard on the HMC graphical
user interface (GUI) or through the HMC command-line interface (CLI).

Note: Information about preparation and requirements with the Integrated
Virtualization Manager can be found in Chapter 7, “Integrated Virtualization
Manager for Live Partition Mobility” on page 221.

3.2 Skill considerations


Live Partition Mobility builds on top of several existing technologies. Familiarity
with them is helpful when working with Live Partition Mobility. This book assumes
you have a working knowledge of the following topics:
򐂰 PowerVM virtualization
– Virtual I/O Server
– Virtual SCSI
– Virtual and shared Ethernet
See PowerVM Virtualization on IBM System p: Introduction and Configuration
Fourth Edition, SG24-7940.
򐂰 Hardware Management Console (HMC)
򐂰 Storage area networks; configuring shared storage is required for Live
Partition Mobility
򐂰 Dynamic logical partitioning
򐂰 AIX or Linux



3.3 Requirements for Live Partition Mobility
Major requirements for active Live Partition Mobility are:
򐂰 Hardware Management Console (HMC) requirements
– Version 7 Release 3.2.0 or later, with required fix MH01062, for both
active and inactive partition migration. If you do not have this level,
upgrade the HMC to the correct level.
– Model 7310-CR2 or later, or the 7310-C03
򐂰 Source and destination system requirements
– The source and destination system must be an IBM Power Systems
POWER6 or POWER7 technology-based model.
A system is capable of being either the source or destination of a migration
if it contains the necessary processor hardware to support it. We call this
additional hardware capability migration support.

Note: Migration possibilities between systems with different processor
types are discussed in 5.12, “Processor compatibility modes” on
page 205.

– Both source and destination systems must have the PowerVM Enterprise
Edition license code installed. To check, use the HMC to:
i. In the navigation area, expand Systems Management.
ii. Select the system in the navigation area.
iii. Expand the Capacity on Demand (CoD) section in the task list by
clicking on it. Select the Enterprise Enablement option and expand it
by clicking on it.
iv. Select View History Log.
The CoD Advanced Functions Activation History Log panel opens.
Figure 3-1 on page 48 shows the activation of Enterprise Edition for
Live Partition Mobility.

Chapter 3. Requirements and preparation 47


Figure 3-1 Activation of Enterprise Edition

v. Click Close.
vi. If the Enterprise Edition code is not activated, you must repeat the first
three steps and then select Enter Activation Code to enable Live
Partition Mobility as shown on Figure 3-2.

Figure 3-2 Enter activation code

– Both source and destination systems must be at firmware level 01Ex320
or later, where x is an S for BladeCenter®, an L for Entry servers (such as
the Power 520, Power 550, and Power 560), an M for Midrange servers
(such as the Power 570), or an H for Enterprise servers (such as the
Power 595). To upgrade the firmware, see the firmware fixes Web site:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/ipha5/fix_serv_firm_kick.htm
The current firmware level can be checked after completing the following
steps on the HMC:
i. In the navigation area, open Systems Management and select the
system.



ii. Select Updates in the task list.
iii. Select View system information. A pop-up window called Specify
LIC Repository appears.
iv. Select None - Display current values in this window and click OK.
The current firmware level appears in a window called View system
information, as shown in Figure 3-3. If the version is not at the
required level for Live Partition Mobility, perform an update through
the HMC by selecting Upgrade Licensed Internal Code to a new
release from Updates in the task list.

Figure 3-3 Checking the current firmware level

Note: You can also check the firmware level by executing the lslic
command on the HMC.

Although there is a minimum required firmware level, each system may
have a different level of firmware. The level of source system firmware
must be compatible with the destination firmware.



Table 3-1 gives an overview of the supported mixed firmware levels. For a
current list of firmware level compatibilities, and how to migrate, see Live
Partition Mobility Support for Power Systems:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/migrate.html
Or check the IBM Prerequisite Web site for POWER6 and POWER7
compatibility:
https://www-912.ibm.com/e_dir/eserverprereq.nsf

Note: On the IBM Prerequisite Web site:


򐂰 Choose the Software tab
򐂰 In the OS/Firmware dropdown, select Live Partition Mobility between
POWER6 and POWER7
򐂰 In the Product dropdown, select Live Partition Mobility
򐂰 For the Function, select ALL Functions

Table 3-1 Supported migration matrix

                     To:
From:                EM320_031  EM320_040  EM320_046  EM320_061  EM330_028  EM340_039
                                                      or higher  or higher  or higher
EM320_031            Supported  Supported  Supported  Blocked    Blocked    Blocked
EM320_040            Supported  Supported  Supported  Blocked    Blocked    Blocked
EM320_046            Supported  Supported  Supported  Supported  Supported  Supported
EM320_061 or higher  Blocked    Blocked    Supported  Supported  Supported  Supported
EM330_028 or higher  Blocked    Blocked    Supported  Supported  Supported  Supported
EM340_039 or higher  Blocked    Blocked    Supported  Supported  Supported  Supported
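The pattern in Table 3-1 can be condensed into a rule: levels before EM320_046 and levels at EM320_061 or later are mutually blocked, while EM320_046 is compatible with both groups. The shell sketch below encodes that rule for illustration only; it is not an IBM-supplied tool, it recognizes only the level strings listed in the table, and it must not replace checking the current support matrix at the URLs above.

```shell
fw_group() {
  # Map a firmware level string such as EM320_040 to a compatibility group:
  #   old    = EM320_031 / EM320_040
  #   bridge = EM320_046 (compatible with both groups)
  #   new    = EM320_061 or higher, EM330_xxx, EM340_xxx
  case "$1" in
    EM320_031|EM320_040)         echo old ;;
    EM320_046)                   echo bridge ;;
    EM320_0[5-9][0-9]|EM3[3-9]*) echo new ;;
    *)                           echo unknown ;;
  esac
}

migration_status() {
  from=$(fw_group "$1"); to=$(fw_group "$2")
  # A migration is blocked only when it crosses between the old and
  # new groups without passing through EM320_046.
  if [ "$from" = old ] && [ "$to" = new ]; then echo Blocked
  elif [ "$from" = new ] && [ "$to" = old ]; then echo Blocked
  else echo Supported
  fi
}

migration_status EM320_040 EM340_039   # Blocked
migration_status EM320_046 EM330_028   # Supported
```

Unknown or future level strings fall through the rule as Supported here, which is exactly why the live support matrix, not this sketch, is authoritative.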

򐂰 Source and destination Virtual I/O Server requirements

– At least one Virtual I/O Server at release level 1.5.1.1 or higher must be
installed on both the source and destination systems.
– A partition attribute, called the mover service partition, has been defined
that enables you to indicate whether a mover-capable Virtual I/O Server
partition should be considered during the selection of the MSP for a
migration. By default, all Virtual I/O Server partitions have this attribute
set to FALSE.



– In addition to having the mover service partition attribute set to TRUE, the
source and destination mover service partitions must be able to
communicate with each other over the network. On both the source and
destination servers, the Virtual Asynchronous Services Interface (VASI)
device provides communication between the mover service partition and
the POWER Hypervisor.
To determine the current release of the Virtual I/O Server and to see if an
upgrade is necessary, use the ioslevel command.
More technical information about the Virtual I/O Server and latest downloads
are on the Virtual I/O Server Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html
򐂰 Operating system requirements
The operating system running in the mobile partition has to be AIX or Linux. A
Virtual I/O Server logical partition or a logical partition running the IBM i
operating system cannot be migrated. The operating system must be at one
of the following levels:
– AIX 5L™ Version 5.3 Technology Level 7 or later (the required level is
5300-07-01)
– AIX Version 6.1 or later (the required level is 6100-00-01)
– Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later (with the
required kernel security update)
– SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later (with
the required kernel security update)
To download the Linux kernel security updates:
http://www14.software.ibm.com/webapp/set2/sas/f/pm/component.html
Previous versions of AIX and Linux can participate in inactive partition
migration if the operating systems support virtual devices and IBM Power
Systems POWER6- and POWER7-based servers.

Note: Ensure that the target hardware supports the operating system you
are migrating.

򐂰 Storage requirements
For a list of supported disks and optical devices, see the Virtual I/O Server
data sheet:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html



򐂰 Network requirements
The migrating partition uses the virtual LAN (VLAN) for network access. The
VLAN must be bridged (if there is more than one, then it also has to be
bridged) to a physical network using a shared Ethernet adapter in the Virtual
I/O Server partition. Your LAN must be configured so that migrating partitions
can continue to communicate with other necessary clients and servers after a
migration is completed.



3.4 Live Partition Mobility preparation checks
Table 3-2 lists the preparation tasks required for a migration.

Table 3-2 Preparing the environment for Live Partition Mobility

Task                      Details, see page  Remarks for inactive migration
Prepare servers           54                 -
Prepare HMC               61                 -
Prepare VIOS              63                 see Table Note 1
Prepare mobile partition  66                 see Table Note 2
Prepare the storage       79                 -
Network considerations    87                 -

Table Note 1: For inactive migration, you perform fewer preparatory tasks on
the Virtual I/O Server because:
򐂰 You do not have to enable the mover service partition on either the source
or destination Virtual I/O Server.
򐂰 You do not have to synchronize the time-of-day clocks.

Table Note 2: For inactive migration, you have to perform fewer preparatory
tasks on the mobile partition:
򐂰 RMC connections are not required.
򐂰 The mobile partition can have dedicated I/O. These dedicated I/O devices
will be removed automatically from the partition before the migration
occurs.
򐂰 Barrier-synchronization registers can be used in the mobile partition.
򐂰 The mobile partition can use huge pages.
򐂰 The applications do not have to be migration-aware or migration-safe.

Certain settings can be changed dynamically (partition workload groups,
mover service partitions, and time reference), but others have to be changed
statically (barrier-synchronization registers and redundant error path
reporting).



3.5 Preparing the systems for Live Partition Mobility
Careful planning of your environment is required before Live Partition Mobility
can be implemented successfully. After you have validated all required versions
and levels, this section describes the planning tasks to consider and complete
on the source and destination systems before you migrate a logical partition,
whether through an inactive or an active partition migration.

3.5.1 HMC
Ensure that the source and destination systems are managed by the same HMC
(or a redundant HMC pair).

Note: HMC Version 7 Release 3.4 introduces an additional migration
scenario, in which the source server is managed by one HMC and the
destination server is managed by a different HMC. Additional requirements
include:
򐂰 Both HMCs must be connected to the same network so that they can
communicate with each other.
򐂰 Secure Shell must be set up correctly between the source and the
destination HMC with the mkauthkeys command.

For more information about this HMC migration scenario see 5.4, “Remote
Live Partition Mobility” on page 130.

3.5.2 Logical memory block size


Ensure that the logical memory block (LMB) size is the same on the source and
destination systems. The default LMB size depends on the amount of memory
installed in the CEC. It varies between 16 MB and 256 MB. A change to the LMB
size can only be done by a user with the administrator authority, and you must
shut down and restart the managed system for the change to take effect.



Figure 3-4 shows how the size of the logical memory block can be modified in the
Performance Setup menu of the Advanced System Management Interface
(ASMI). The ASMI can be launched through the Operations section in the task
list on the HMC.

Figure 3-4 Checking and changing LMB size with ASMI
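Besides the ASMI panel, the LMB size of each managed system can be read from the HMC command line and compared. The sketch below assumes the two values were captured with the lshwres command (for example, lshwres -r mem -m <system> --level sys -F mem_region_size); the numbers shown are placeholders, not output from a real system.

```shell
# Sketch: compare the LMB (memory region) size of the source and
# destination managed systems. Substitute real captured values, e.g.:
#   src_lmb=$(lshwres -r mem -m SourceSys --level sys -F mem_region_size)
src_lmb=256   # placeholder value, in MB
dst_lmb=256   # placeholder value, in MB

if [ "$src_lmb" -eq "$dst_lmb" ]; then
  echo "LMB sizes match (${src_lmb} MB); migration is possible"
else
  echo "LMB mismatch: source ${src_lmb} MB, destination ${dst_lmb} MB" >&2
fi
```

Remember that changing a mismatched LMB size requires a restart of the managed system, so this is worth checking early.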

3.5.3 Battery power


Ensure that the destination system is not running on battery power. If the
destination system is running on battery power, then you need to return the
system to its regular power source before moving a logical partition to it.
However, the source system can be running on battery power.



3.5.4 Available memory
Ensure that the destination system has enough available memory to support the
mobile partition. To determine the available memory on the destination system,
and allocate more memory if necessary, you must have super administrator
authority (a user with the HMC hmcsuperadmin role, such as hscroot). The
following steps have to be completed on the HMC:
1. Determine the amount of memory of the mobile partition on the source
system:
a. In the navigation area, open Systems Management.
b. Select the source system in the navigation area.
c. In the contents area, select the mobile partition and select Properties in
the task list. The Properties window opens.
d. Select the Hardware tab and then the Memory tab.
e. View the Memory section and record the assigned memory settings.
f. Click OK.
Figure 3-5 shows the result of the actions.

Figure 3-5 Checking the amount of memory of the mobile partition



2. Determine the memory available on the destination system:
a. In the contents area, select the destination system and select Properties
in the task list.
b. Select the Memory tab.
c. Record the Available memory and Current memory available for partition
usage.
d. Click OK.
Figure 3-6 shows the result of the actions.

Figure 3-6 Available memory on destination system

3. Compare the values from the previous steps:


– If the destination system has enough available memory to support the
mobile partition, skip the rest of this procedure and continue with other
preparation tasks.
– If the destination system does not have enough available memory to
support the mobile partition, you must dynamically free up some memory
(or use the Capacity on Demand (CoD) feature to activate additional
memory, where available) on the destination system before the actual
migration can take place.
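The comparison in step 3 is straightforward arithmetic, and the same check applies to processing units in 3.5.5. A minimal sketch follows, with the two values recorded from the Properties panels entered by hand; the numbers are placeholders.

```shell
# Sketch: decide whether the destination system has enough free memory
# for the mobile partition. Values (in MB) stand in for the figures
# recorded from the HMC Properties panels in steps 1 and 2.
mobile_mem=4096        # assigned memory of the mobile partition
dest_avail_mem=6144    # memory available for partition usage on the destination

if [ "$dest_avail_mem" -ge "$mobile_mem" ]; then
  echo "OK: ${dest_avail_mem} MB available >= ${mobile_mem} MB required"
else
  short=$((mobile_mem - dest_avail_mem))
  echo "Free ${short} MB via dynamic LPAR or CoD before migrating"
fi
```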



3.5.5 Available processors to support Live Partition Mobility
Ensure that the destination system has enough available processors (or
processing units) to support the mobile partition. The profile created on the
destination server matches the source server’s, therefore dedicated processors
must be available on the target if that is what you are using, or enough
processing units in a shared processor pool.

To determine the available processors on the destination system and allocate
more processors if necessary, you must have super administrator authority (a
user with the HMC hmcsuperadmin role, such as hscroot).

Complete the following steps in the HMC:


1. Determine how many processors the mobile partition requires:
a. In the navigation area, expand Systems Management.
b. Select the source system in the navigation area.
c. In the contents area, select the mobile partition and select Properties in
the task list. A new pop-up window called Properties appears.
d. Select the Hardware tab and then the Processors tab.
e. View the Processor section and record the processing units settings.
f. Click OK.



Figure 3-7 shows the result of the actions.

Note: In recent HMC levels, p6 appears as POWER6. See Figure 3-7.

Figure 3-7 Checking the number of processing units of the mobile partition

2. Determine the processors available on the destination system:


a. In the contents area, select the destination system and select Properties
in the task list.
b. Select the Processors tab.
c. Record the number of processors available for partition usage.
d. Click OK.



Figure 3-8 shows the result of the actions.

Figure 3-8 Available processing units on destination system

3. Compare the values from the previous steps.


– If the destination system has enough available processors to support the
mobile partition, then skip the rest of this procedure and continue with the
remaining preparation tasks for Live Partition Mobility.
– If the destination system does not have enough available processors to
support the mobile partition, you must dynamically free up processors (or
use the CoD feature, when available) on the destination system before the
actual migration can take place.



3.6 Preparing the HMC for Live Partition Mobility
The version and release of the HMC has to be at the correct level for Live
Partition Mobility. See 3.3, “Requirements for Live Partition Mobility” on page 47
and also see 3.5.1, “HMC” on page 54.

Figure 3-9 shows how to check the current version and release of our HMC.

Figure 3-9 Checking the version and release of HMC

Note: Live Partition Mobility requires HMC Version 7 Release 3.2 or higher to
be used. In this publication, we used the latest Version 7 Release 3.4 of the
HMC software (see Figure 3-9 on page 61). You can also verify the current
HMC version, and release and service pack level with the lshmc command.

When using Live Partition Mobility with an HMC managing at least one
POWER7-based server, HMC V7R710 or later is required.
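A scripted version of this check might parse the output of lshmc -V. The sample text below only approximates the shape of that output; treat the field names as assumptions and adjust the parsing to what your HMC actually prints.

```shell
# Sketch: check an lshmc -V style output against the V7R3.2 minimum for
# Live Partition Mobility. On a real HMC, replace the sample text with
# the command's actual output.
sample='version= Version: 7
 Release: 3.4.0
HMC Build level 20081204.1'

version=$(printf '%s\n' "$sample" | awk -F': ' '/Version:/ {print $2}')
release=$(printf '%s\n' "$sample" | awk -F': ' '/Release:/ {print $2}' | cut -d. -f1,2)

# Minimum is Version 7 Release 3.2
if [ "$version" -gt 7 ] || { [ "$version" -eq 7 ] && \
     awk "BEGIN{exit !($release >= 3.2)}"; }; then
  echo "HMC V${version}R${release}: Live Partition Mobility supported"
else
  echo "HMC V${version}R${release}: upgrade required"
fi
```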



If the HMC is not at the correct version and release, an upgrade is required.
Select Updates (1) and then click Update HMC (2), as shown in Figure 3-10.
Also see Figure 3-11 on page 63.

Figure 3-10 Upgrading the Hardware Management Console

For more information about upgrading the Hardware Management Console, see:
http://www14.software.ibm.com/webapp/set2/sas/f/hmc/home.html



After you click OK, the window shown in Figure 3-11 opens.

Figure 3-11 Install Corrective Service to upgrade the HMC

3.7 Preparing the Virtual I/O Servers


Several tasks must be completed to prepare the source and destination Virtual
I/O Servers for Live Partition Mobility. At least one Virtual I/O Server logical
partition must be installed and activated on both the source and destination
systems. For Virtual I/O Server installation instructions, see:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphb1/iphb1.pdf



3.7.1 Virtual I/O Server version
Ensure that the source and destination Virtual I/O Servers are at Version 1.5.1.1
or higher. This can be checked on the Virtual I/O Server by running the ioslevel
command, as shown in Example 3-1.

Example 3-1 Output of the ioslevel command


$ ioslevel
2.1.0.1-FP-20.0
$

If the source and destination Virtual I/O Servers do not meet the requirements,
perform an upgrade.
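The ioslevel string can also be compared against the 1.5.1.1 minimum in a script. The sketch below uses the level from Example 3-1; dropping the -FP suffix and comparing four numeric fields is an assumption about the level format, not an IBM-documented parsing rule.

```shell
# Sketch: compare an ioslevel string against the 1.5.1.1 minimum.
# Replace the hard-coded level with the output of `ioslevel` on your
# Virtual I/O Server.
level=2.1.0.1-FP-20.0
minimum=1.5.1.1

# Strip any fix-pack suffix, then sort the two versions numerically,
# field by field; if the minimum sorts first, the level meets it.
base=${level%%-*}
lowest=$(printf '%s\n%s\n' "$base" "$minimum" | \
         sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1)

if [ "$lowest" = "$minimum" ]; then
  echo "VIOS $level meets the 1.5.1.1 minimum"
else
  echo "VIOS $level is below 1.5.1.1 -- upgrade required"
fi
```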

3.7.2 Mover service partition


Ensure that the mover service partition (MSP) attribute is enabled on at least
one Virtual I/O Server partition on each of the source and destination systems.
The mover service partition is a Virtual I/O Server logical partition that is
allowed to use its VASI adapter for communicating with the POWER Hypervisor.

There must be at least one mover service partition on both the source and
destination Virtual I/O Servers for the mobile partition to participate in active
partition migration. If the mover service partition is disabled on either the source
or destination Virtual I/O Server, the mobile partition can be migrated inactively.

To enable the source and destination mover service partitions using the HMC,
you must have super administrator authority (a user with the HMC
hmcsuperadmin role, such as hscroot) and complete the following steps:
1. In the navigation area, open Systems Management and select Servers.
2. In the contents area, open the source system.
3. Select the source Virtual I/O Server logical partition and select Properties on
the task area.
4. On the General tab, select Mover Service Partition, and click OK.
5. Repeat these steps for the destination system.



Figure 3-12 shows the result of these actions.

Figure 3-12 Enabling mover service partition

3.7.3 Synchronize time-of-day clocks


Another recommended, although optional, task for active partition migration is
the synchronization of the time-of-day clocks for the source and destination
Virtual I/O Server partitions.

If you choose not to complete this step, the source and destination Virtual I/O
Servers synchronize the clocks while the mobile partition is moving from the
source system to the destination system. Completing this step before the mobile
partition is moved can prevent possible errors.

To synchronize the time-of-day clocks on the source and destination Virtual I/O
Servers using the HMC, you must be a super administrator (such as hscroot) to
complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the source Virtual I/O Server logical partition.
4. Click Properties.
5. Click the Settings tab.
6. For Time reference, select Enabled and click OK.
7. Repeat the previous steps on the destination system for the destination
Virtual I/O Server.



Figure 3-13 shows the time-of-day synchronization.

Figure 3-13 Synchronizing the time-of-day clocks

Note: After the Virtual I/O Server infrastructure is configured, a backup of the
Virtual I/O Servers is recommended; this approach produces an established
checkpoint prior to migration.

3.8 Preparing the mobile partition for mobility


This section describes the tasks that you must complete to prepare a mobile
partition for Live Partition Mobility in order to have a successful migration.

3.8.1 Operating system version


Ensure that the operating system meets the requirements for Live Partition
Mobility. These requirements can be found in 3.3, “Requirements for Live
Partition Mobility” on page 47.

3.8.2 RMC connections


For active partition migration, ensure that Resource Monitoring and Control
(RMC) connections are established.



RMC can be configured to monitor resources and perform an action in response
to a defined condition. The flexibility of RMC enables you to configure response
actions or scripts that manage general system conditions with little or no
involvement from the system administrator.

To establish an RMC connection for the mobile partition, you must be a super
administrator (a user with the HMC hmcsuperadmin role, such as hscroot) on the
HMC and complete the following steps:
1. Sign on to the operating system of the mobile partition with root authority.
2. From the command line, enter the following command to check if the RMC
connection is established:
lsrsrc IBM.ManagementServer
This command is shown in Example 3-2.

Example 3-2 Checking IBM.ManagementServer resource


# lsrsrc IBM.ManagementServer
Resource Persistent Attributes for IBM.ManagementServer
resource 1:
Name = "9.3.5.180"
Hostname = "9.3.5.180"
ManagerType = "HMC"
LocalHostname = "9.3.5.115"
ClusterTM = "9078-160"
ClusterSNum = ""
ActivePeerDomain = ""
NodeNameList = {"mobile"}
resource 2:
Name = "9.3.5.128"
Hostname = "9.3.5.128"
ManagerType = "HMC"
LocalHostname = "9.3.5.115"
ClusterTM = "9078-160"
ClusterSNum = ""
ActivePeerDomain = ""
NodeNameList = {"mobile"}
#

– If the command output includes ManagerType = "HMC", then the RMC
connection is established; you can skip step 3 on page 68 and continue
with the additional preparation tasks in 3.8.3, “Disable redundant
error path reporting” on page 68.
– If you received a message indicating that there is no
IBM.ManagementServer resource or that ManagerType does not equal
HMC, then continue to the next step.



3. Establish the RMC connection specifically for your operating system:
– For AIX, see Configuring Resource Monitoring and Control (RMC) for the
Partition Load Manager, found at:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphbk/iphbkrmc_configuration.htm
– For Linux, install the RSCT utilities. Download these tools from the Service
and productivity tools Web site (and select the appropriate HMC- or
IVM-managed servers link):
http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
• Red Hat Enterprise Linux: Install additional software (RSCT Utilities)
for Red Hat Enterprise Linux on HMC managed servers.
• SUSE Linux Enterprise Server: Install additional software (RSCT
Utilities) for SUSE Linux Enterprise Server on HMC managed servers.
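The decision that follows Example 3-2 — the connection is established if the output shows ManagerType = "HMC" — can be scripted. In the sketch below, a here-document modeled on Example 3-2 stands in for the live lsrsrc IBM.ManagementServer output.

```shell
# Sketch: decide from lsrsrc-style output whether an HMC RMC connection
# exists. In practice, pipe `lsrsrc IBM.ManagementServer` itself instead
# of using this sample text.
rmc_output=$(cat <<'EOF'
Resource Persistent Attributes for IBM.ManagementServer
resource 1:
        Name          = "9.3.5.180"
        ManagerType   = "HMC"
        LocalHostname = "9.3.5.115"
EOF
)

if printf '%s\n' "$rmc_output" | grep -q 'ManagerType.*=.*"HMC"'; then
  echo "RMC connection to an HMC is established"
else
  echo "No HMC RMC connection -- configure RMC before an active migration"
fi
```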

3.8.3 Disable redundant error path reporting


Ensure that the mobile partition is not enabled for redundant error path reporting.

Redundant error path reporting allows a logical partition to report server common
hardware errors and partition hardware errors to the HMC. Redundant error path
reporting must be disabled if you want to migrate a logical partition.

To disable redundant error path reporting for the mobile partition, you must be a
super administrator and complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition you wish to migrate and select
Configuration  Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.
5. Click the Settings tab.
6. Deselect Enable redundant error path reporting, and click OK.
7. Because disabling redundant error path reporting cannot be done
dynamically, you have to shut down the mobile partition, then power it on
using the profile with the modifications.



Figure 3-14 shows the disabled redundant error path handling:

Figure 3-14 Disable redundant error path handling

3.8.4 Virtual serial adapters


Ensure that the mobile partition is not using a virtual serial adapter in slots higher
than slot 1.

Virtual serial adapters are often used for virtual terminal connections to the
operating system. The first two virtual serial adapters (slots 0 and 1) are reserved
for the HMC. For a logical partition to participate in a partition migration, it cannot
have any required virtual serial adapters, except for the two reserved for the
HMC.

To dynamically disable unreserved virtual serial adapters using the HMC, you
must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition to migrate and select
Configuration  Manage Profiles.



4. Select the active logical partition profile and select Edit from the Actions
menu.
5. Select the Virtual Adapter tab.
6. If there are more than two virtual serial adapters listed, then ensure that the
adapters in slots 2 and higher are not selected as Required.
7. Click OK.

Figure 3-15 shows the result of the steps.

Figure 3-15 Verifying the number of serial adapters on the mobile partition

3.8.5 Partition workload groups


Ensure that the mobile partition is not part of a logical partition group.

A partition workload group identifies a set of partitions that reside on the same
system. The partition profile specifies the name of the partition workload group to
which it belongs, if applicable. For a logical partition to participate in a partition
migration, it cannot be assigned to a partition workload group.

To dynamically remove the mobile partition from a partition workload group, you
must be a super administrator on the HMC and complete the following steps:
1. In the navigation area, expand Systems Management  Servers.
2. In the contents area, open the source system.



3. Select the mobile partition and select Properties.
4. Click the Other tab.
5. In the Workload group field, select (None).
6. In the contents area, open the mobile partition and select Configuration 
Manage Profiles.
7. Select the active logical partition profile and select Edit from the Actions
menu.
8. Click the Settings tab.
9. In the Workload Management area, select (None) and click OK.
10.Repeat the last three steps for all partition profiles associated with the mobile
partition.

Figure 3-16 and Figure 3-17 on page 72 show the tabs for the disablement of the
partition workload group (both in the partition and in the partition profiles).

Figure 3-16 Disabling partition workload group - Other tab



Figure 3-17 Disabling partition workload group - Settings tab

3.8.6 Barrier-synchronization register


Ensure that the mobile partition is not using barrier-synchronization register
(BSR) arrays.

BSR is a memory register that is located on certain POWER technology-based
processors. A parallel-processing application running on AIX can use a BSR to
perform barrier synchronization, which is a method for synchronizing the threads
in the parallel-processing application. For a logical partition to participate in
active partition migration, it cannot use BSR arrays. However, it can still
participate in inactive partition migration if it uses BSR.

To disable BSR for the mobile partition using the HMC, you must be a super
administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers.
3. In the contents area, open the source system.
4. Select the mobile partition and select Properties.



5. Click the Hardware tab.
6. Click the Memory tab.
– If the number of BSR arrays equals zero, the mobile partition can
participate in inactive or active migration, as shown in Figure 3-18.
You can now continue with additional preparatory tasks for the mobile
partition.

Figure 3-18 Checking the number of BSR arrays on the mobile partition

– If the number of BSR arrays is not equal to zero, take one of the following
actions:
• Perform an inactive migration instead of an active migration. Skip the
remaining steps and see 2.4, “Inactive partition migration” on page 27.
• Click OK and continue to the next step to prepare the mobile partition
for an active migration.
7. In the contents area, open the mobile partition and select Configuration 
Manage Profiles.
8. Select the active logical partition profile and select Edit from the Actions
menu.
9. Click the Memory tab.
10.Enter 0 in the BSR arrays for this profile field and click OK. This is shown in
Figure 3-19 on page 74.



11.Because modifying BSR cannot be done dynamically, you have to shut down
the mobile partition, then power it on by using the profile with the BSR
modifications.

Figure 3-19 Setting number of BSR arrays to zero

3.8.7 Huge pages


Ensure that the mobile partition is not using huge pages.

Huge pages can improve performance in specific environments that require a
high degree of parallelism, such as in DB2® partitioned database environments.
You can specify the minimum, desired, and maximum number of huge pages to
assign to a partition when you create a partition profile. For a logical partition to
participate in active partition migration, it cannot use huge pages. However, if the
mobile partition does use huge pages, it can still participate in inactive partition
migration.



To configure huge pages for the mobile partition using the HMC, you must be a
super administrator and complete the following steps:
1. Open the source system and select Properties.
2. Click the Advanced tab.
– If the current huge page memory equals zero (0), as shown in Figure 3-20,
skip the remaining steps of this procedure and continue with additional
preparatory tasks for the mobile partition in 3.8.8, “Physical or dedicated
I/O” on page 76.

Figure 3-20 Checking if huge page memory equals zero

– If the current huge page memory is not equal to 0, take one of the
following actions:
• Perform an inactive migration instead of an active migration. Skip the
remaining steps and see 2.4, “Inactive partition migration” on page 27.
• Click OK and continue with the next step to prepare the mobile partition
for an active migration.
3. In the contents area, open the mobile partition and select Configuration 
Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.



5. Click the Memory tab.
6. Enter 0 in the field for desired huge page memory, and click OK. This is
shown in Figure 3-21.
7. Because changing huge pages cannot be done dynamically, you have to shut
down the mobile partition, then turn it on by using the profile with the
modifications.

Figure 3-21 Setting Huge Page Memory to zero

3.8.8 Physical or dedicated I/O


Ensure that the mobile partition does not have physical or dedicated (required)
I/O adapters and devices.

For a logical partition to participate in active partition migration, it cannot have
any required or physical I/O. All I/O must be virtual. If the mobile partition has
required or physical I/O, it can participate in inactive partition migration. After
migration, the required or physical I/O configuration must be verified. Physical I/O
marked as desired can be removed dynamically with a dynamic LPAR operation.

To remove required I/O from the mobile partition using the HMC, you must be a
super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Server and select the source system.
3. In the contents area, open the mobile partition and select Configuration 
Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.
5. Click the I/O tab. See Figure 3-22. Note the following information.
– If Required is not selected for any resource, skip the remainder of this
procedure and continue with additional preparatory tasks for the mobile
partition, in 3.8.9, “Name of logical partition profile” on page 78.

Figure 3-22 Checking if there are required resources in the mobile partition



– If Required is selected for any resource, take one of the following actions:
• Perform an inactive migration instead of an active migration. Skip the
remaining steps and see 2.4, “Inactive partition migration” on page 27.
• Continue with the next step to prepare the mobile partition for an active
migration.
6. For each resource that is selected as Required, deselect Required and
click OK.
7. Shut down the mobile partition, then turn it on by using the profile with the
required I/O resource modifications.

Note: You must also verify that no Logical Host Ethernet Adapter (LHEA)
devices are configured, because these are also considered physical I/O.
Inactive migration is still possible if LHEAs are configured.

Figure 3-23 shows you how to verify whether a LHEA is configured for the mobile
partition. First, select an IVE physical port to define a LHEA, and then verify
whether there are logical port IDs. If no logical port ID is in this column, then no
logical Host Ethernet Adapter is configured for this partition. More information
about Integrated Virtual Ethernet adapters can be found in the Integrated Virtual
Ethernet Adapter Technical Overview and Introduction, REDP-4340 publication.

Figure 3-23 Logical Host Ethernet Adapter

3.8.9 Name of logical partition profile


Determine the name of the logical partition profile for the mobile partition on the
destination system. This is an optional step. As part of the migration process, the
HMC creates a new migration profile containing the partition’s current state.
Unless you specify a profile name when you start the migration, this profile



replaces the existing profile that was last used to activate the partition. Also, if
you specify an existing profile name, the HMC replaces that profile with the new
migration profile. If you do not want the migration profile to replace any of the
partition’s existing profiles, you must specify a unique profile name. The new
profile contains the partition’s current configuration and any changes that are
made during the migration.

3.8.10 Mobility-safe or mobility-aware


Ensure that the applications running in the mobile partition are mobility-safe or
mobility-aware. For more information, see:
• 5.8, “Migration awareness” on page 177
• 5.9, “Making applications migration-aware” on page 178

3.8.11 Changed partition profiles


If you changed any partition profile attributes, shut down and activate the new
profile so that the new values can take effect:
1. In the contents area, select the mobile partition, click Operations, and click
Shut down.
2. In the contents area, select the mobile partition, click Operations, click
Activate, and select the logical partition profile.
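The shutdown and activation in the two steps above can also be driven from the HMC command line with the chsysstate command. The sketch below only builds and prints the command strings; the managed system, partition, and profile names are hypothetical placeholders.

```shell
# Illustrative sketch: build (but do not run) the HMC chsysstate commands that
# shut down the mobile partition and activate it with the changed profile.
# All names below are hypothetical placeholders.
managed_system="9117-MMA-SN101F170-L10"
partition="mobile"
profile="default"

shutdown_cmd="chsysstate -r lpar -m $managed_system -o shutdown -n $partition"
activate_cmd="chsysstate -r lpar -m $managed_system -o on -n $partition -f $profile"

echo "$shutdown_cmd"
echo "$activate_cmd"
```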

3.9 Configuring the external storage


This section describes the tasks that you must complete to ensure your storage
configuration meets the minimal configuration for Live Partition Mobility before
you can actually migrate your logical partition.

To configure external storage:


1. Verify that the same SAN disks used as virtual disks by the mobile partition
are assigned to the source and destination Virtual I/O Server logical
partitions.
2. Verify, with the lsdev command, that the reserve_policy attributes on the
shared physical volumes are set to no_reserve on the Virtual I/O Servers:
– To list all the disks, type the following command:
lsdev -type disk



– To list the attributes of hdiskX, type the following command:
lsdev -dev hdiskX -attr
– If reserve_policy is not set to no_reserve, use the following command:
chdev -dev hdiskX -attr reserve_policy=no_reserve
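The reserve_policy check in step 2 can be scripted. The sketch below is illustrative only: it parses a captured fragment of lsdev -dev hdiskX -attr output rather than querying a live Virtual I/O Server, and the sample attribute values are assumptions.

```shell
# Illustrative sketch: decide whether a disk's reserve_policy must be changed,
# based on captured "lsdev -dev hdiskX -attr" output (sample data, not live).
attr_output='reserve_policy single_path Reserve Policy True
queue_depth    3           Queue DEPTH    True'

# The current value is the second field of the reserve_policy line.
current_policy=$(printf '%s\n' "$attr_output" | awk '$1 == "reserve_policy" {print $2}')

if [ "$current_policy" != "no_reserve" ]; then
    # On a real Virtual I/O Server, you would now run:
    #   chdev -dev hdiskX -attr reserve_policy=no_reserve
    echo "reserve_policy is $current_policy: change required"
else
    echo "reserve_policy is already no_reserve"
fi
```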
3. Verify that the physical volume has the same unique identifier (UDID),
physical identifier (PVID), or IEEE volume attribute on both Virtual I/O
Servers. One of these identifiers is required in order to export a physical
volume as a virtual device.
– To list disks with a unique identifier (UDID):
i. Type the oem_setup_env command on the Virtual I/O Server CLI.
ii. Type the odmget -qattribute=unique_id CuAt command to list the
disks that have a UDID. See Example 3-3.

Example 3-3 Output of odmget command


CuAt:
name = "hdisk6"
attribute = "unique_id"
value = "3E213600A0B8000291B080000520E023C6B8D0F1815 FAStT03IBMfcp"
type = "R"
generic = "D"
rep = "nl"
nls_index = 79

CuAt:
name = "hdisk7"
attribute = "unique_id"
value = "3E213600A0B8000114632000073244919ADCA0F1815 FAStT03IBMfcp"
type = "R"
generic = "D"
rep = "nl"
nls_index = 79

iii. Type exit to return to the Virtual I/O Server prompt.



– To list disks with a physical identifier (PVID):
i. Type the lspv command to list the devices with a PVID. See
Example 3-4. If the second column has a value of none, the physical
volume does not have a PVID. We recommend putting a PVID on the
physical volume before it is exported as a virtual device.

Example 3-4 Output of lspv command


$ lspv
NAME PVID VG STATUS
hdisk0 00c1f170d7a97dec rootvg active
hdisk6 00c0f6a0915fc126 None
hdisk7 00c0f6a08de5008b None

ii. Type the chdev command to put a PVID on the physical volume in the
following format:
chdev -dev physicalvolumename -attr pv=yes -perm
– To list disks with an IEEE volume attribute identifier, issue the following
command (in the oem_setup_env shell):
lsattr -El hdiskX
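The identifier checks in step 3 can be scripted. The following sketch parses captured command output rather than querying a live Virtual I/O Server; the sample odmget and lspv data is adapted from Examples 3-3 and 3-4, and hdisk7's missing PVID is an assumption made for illustration.

```shell
# Illustrative sketch: find which disks have a UDID and which lack a PVID,
# by parsing captured odmget/lspv output (sample data below, not live queries).

# Sample "odmget -qattribute=unique_id CuAt" output (adapted from Example 3-3).
odm_output='CuAt:
        name = "hdisk6"
        attribute = "unique_id"

CuAt:
        name = "hdisk7"
        attribute = "unique_id"'

# The quoted device names on the "name" lines are the disks with a UDID.
disks_with_udid=$(printf '%s\n' "$odm_output" | awk -F'"' '/name =/ {print $2}')
echo "Disks with a UDID: $disks_with_udid"

# Sample "lspv" output; "none" in the second column means no PVID is set
# (hdisk7's missing PVID is assumed here for illustration).
lspv_output='hdisk0 00c1f170d7a97dec rootvg active
hdisk6 00c0f6a0915fc126 None
hdisk7 none None'

# Each disk listed here would need: chdev -dev <name> -attr pv=yes -perm
disks_without_pvid=$(printf '%s\n' "$lspv_output" | awk '$2 == "none" {print $1}')
echo "Disks without a PVID: $disks_without_pvid"
```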
4. Verify that the mobile partition has access to a source Virtual I/O Server
virtual SCSI adapter. You have to verify the configuration of the virtual SCSI
adapters on the mobile partition and the source Virtual I/O Server logical
partition to ensure that the mobile partition has access to storage. You must
be a super administrator (such as hscroot) to complete the following steps:
a. Verify the virtual SCSI adapter configuration of the mobile partition:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the mobile partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Record the Slot ID and Remote Slot ID for each virtual SCSI adapter.
vii. Click OK.



The result of these steps is shown in Figure 3-24.

Figure 3-24 Virtual SCSI client adapter

b. Verify the virtual SCSI adapter configuration of the source Virtual I/O
Server virtual SCSI adapter:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the Virtual I/O Server logical partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Verify that the Slot ID corresponds to the Remote Slot ID that you
recorded (in step vi on page 81) for the virtual SCSI adapter on the
mobile partition.
vii. Verify that the Remote Slot ID is either blank or that it corresponds to
the Slot ID that you recorded (in step vi on page 81) for the virtual SCSI
adapter on the mobile partition.
viii. Click OK.



The result of these steps is shown in Figure 3-25.

Figure 3-25 Virtual SCSI server adapters



c. If the values are incorrect, plan the slot assignments and connection
specifications for the virtual SCSI adapters by using a worksheet similar to
the one in Table 3-3.

Table 3-3 Virtual SCSI adapter worksheet

Virtual SCSI adapter                                          Slot number   Connection specification
Source Virtual I/O Server virtual SCSI adapter
Destination Virtual I/O Server virtual SCSI adapter
Mobile partition virtual SCSI adapter on source system
Mobile partition virtual SCSI adapter on destination system

If the virtual SCSI adapters on the source Virtual I/O Server logical
partition allow access to the virtual SCSI adapters of every logical partition
(not only the mobile partition), you have two solutions:
• You may create a new virtual SCSI server adapter on the source Virtual
I/O Server and allow only the virtual SCSI client adapter on the mobile
partition to access it.
• You may change the connection specifications of a virtual SCSI server
adapter on the source Virtual I/O Server so that it allows access to the
virtual SCSI adapter on the mobile partition. This means that the virtual
SCSI adapter of the client logical partition that currently has access to
the virtual SCSI adapter on the source Virtual I/O Server will no longer
have access to the adapter.
5. Verify that the destination Virtual I/O Server has sufficient free virtual slots
to create the virtual SCSI adapters required to host the mobile partition
after it moves to the destination system. To verify the virtual SCSI
configuration by using the HMC, you must be a super administrator (such as
hscroot) and complete the following steps:
a. In the navigation area, open Systems Management.
b. Select Servers.
c. In the contents area, open the destination system.
d. Select the destination Virtual I/O Server logical partition and click
Properties.



e. Select the Virtual Adapters tab and compare the number of virtual
adapters to the maximum number of virtual adapters. This is shown in
Figure 3-26.

Figure 3-26 Checking free virtual slots

– If, after verification, the maximum number of virtual adapters is greater
than or equal to the number of virtual adapters plus the number of virtual
SCSI adapters required to host the migrating partition, you can continue
with additional preparatory tasks at step 6 on page 86.
– If the maximum virtual adapter value does not allow the addition of
required virtual SCSI adapters for the mobile partition, then you have to
modify its partition profile by completing the following steps:
i. In the navigation area, open Systems Management.
ii. Select Servers.
iii. In the contents area, open the destination system.
iv. Select the destination Virtual I/O Server logical partition.
v. In the tasks area, click Configuration → Manage Profiles.
vi. Select the active logical partition profile and select Edit from the
Actions menu.
vii. Click the Virtual Adapters tab and modify (increase) the number of
maximum virtual adapters. You must shut down and restart the logical
partition for the change to take effect.
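The comparison in step 5 is simple arithmetic. The sketch below illustrates it with assumed values; the real numbers come from the Virtual Adapters tab of the destination Virtual I/O Server's partition properties.

```shell
# Illustrative sketch of the free-slot check; all values are assumptions.
max_virtual_adapters=40      # "Maximum virtual adapters" in the profile
current_virtual_adapters=35  # adapters already defined on the destination VIOS
required_for_mobile=3        # virtual SCSI adapters the mobile partition needs

if [ $((current_virtual_adapters + required_for_mobile)) -le "$max_virtual_adapters" ]; then
    echo "Enough free virtual slots: continue with step 6"
else
    echo "Too few slots: increase the maximum in the profile and restart the VIOS"
fi
```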



6. Verify that the mobile partition has access to the same physical storage on the
storage area network from both the source and destination environments.
This requirement must be fulfilled for Live Partition Mobility to be successful.
– In the source environment, check that the following connections exist:
• A virtual SCSI client adapter on the mobile partition must have access
to a virtual SCSI adapter on the source Virtual I/O Server logical
partition.
• That virtual SCSI server adapter on the source Virtual I/O Server
logical partition must have access to a remote storage adapter on the
source Virtual I/O Server logical partition.
• That remote storage adapter on the source Virtual I/O Server logical
partition must be connected to a storage area network and have
access to some physical storage in the network.
– In the destination environment, check that a remote storage adapter on
the destination Virtual I/O Server logical partition has access to the same
physical storage as the source Virtual I/O Server logical partition.
To verify the virtual adapter connections by using the HMC, you must be a
super administrator and complete the following steps:
a. Select Systems Management.
b. Select Servers.
c. In the contents area, open the source system and select a mobile
partition.
d. Select the mobile partition, select Hardware Information, select Virtual
I/O Adapters, and select SCSI.
e. Verify all the information and click OK. The result is shown in Figure 3-27.

Figure 3-27 The Virtual SCSI Topology of the mobile partition

• If the information is correct, go to step f on page 87.


• If the information is incorrect, return to the beginning of this section and
complete the task associated with the incorrect information.



f. In the contents area, open the destination system.
g. Select the destination Virtual I/O Server logical partition.
h. Select Hardware Information, select Virtual I/O Adapters, and select
SCSI.
i. Verify the information and click OK.
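On the Virtual I/O Server command line, the same virtual SCSI topology can be inspected with the lsmap command. The sketch below parses a captured sample of lsmap -all output instead of querying a live system; the device names and location codes in the sample are assumptions.

```shell
# Illustrative sketch: extract the backing device of a virtual SCSI mapping
# from captured "lsmap -all" output (sample data below, names assumed).
lsmap_output='SVSA            Physloc                          Client Partition ID
--------------- -------------------------------- -------------------
vhost0          U9117.MMA.101F170-V1-C11         0x00000003

VTD                   vtscsi0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6'

# The backing device is the third field of the "Backing device" line.
backing_device=$(printf '%s\n' "$lsmap_output" | awk '/^Backing device/ {print $3}')
echo "vtscsi0 is backed by $backing_device"
```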
7. Verify that the mobile partition does not have physical or required I/O adapters
and devices. This is only an issue for active partition migration. If you want to
perform an active migration, you must move the physical or required I/O from
the mobile partition, as explained in 3.8.8, “Physical or dedicated I/O” on
page 76.
8. All changes to the mobile partition’s profile must be activated before
starting the migration so that the new values can take effect:
a. If the partition is not activated, it must be powered on. It is sufficient to
activate the partition to the SMS menu.
b. If the partition is active, you can shut it down and power on the partition
again by using the changed logical partition profile.

3.10 Network considerations


You must prepare and configure the network for partition migration, and
complete several tasks to ensure that your network configuration meets the
minimal configuration for Live Partition Mobility.

You first have to create a shared Ethernet adapter on the Virtual I/O Server using
the HMC so that the client logical partitions can access the external network
without requiring a physical Ethernet adapter. Shared Ethernet adapters are
required on both source and destination Virtual I/O Servers for all the external
networks used by mobile partitions. If you plan to use a shared Ethernet adapter
(SEA) with an Integrated Virtual Ethernet (IVE) adapter, ensure that the physical
port of this IVE adapter is set to promiscuous mode for the Virtual I/O Server. If
the IVE is put in promiscuous mode, it can only be used by a single LPAR. For
more information about IVE, see Integrated Virtual Ethernet Adapter Technical
Overview and Introduction, REDP-4340.

Notes: Link Aggregation or EtherChannel can also be used as the shared
Ethernet adapter.

If you plan to use the Integrated Virtual Ethernet adapter with the shared
Ethernet adapter, ensure that you use the logical host Ethernet adapter to
create the shared Ethernet adapter.



Perform the following steps on the source and destination Virtual I/O Servers:
1. Ensure that you connect the source and destination Virtual I/O Servers and
the shared Ethernet adapter to the network.
2. Configure virtual Ethernet adapters for the source and destination Virtual I/O
Server partitions. If virtual switches are available, ensure that the virtual
Ethernet adapters on the source Virtual I/O Server are configured on a virtual
switch that has the same name as the virtual switch that is used on the
destination Virtual I/O Server.
3. Ensure that the mobile partition has a virtual Ethernet adapter created by
using the HMC GUI.
4. Activate the mobile partition to establish communication between its virtual
Ethernet and the Virtual I/O Servers virtual Ethernet.
5. Verify that the operating system on the mobile partition sees the new Ethernet
adapter by using the following command:
lsdev -Cc adapter
6. Check that the client partition can access the external network. To check,
configure the TCP/IP connections for the virtual adapters on the client logical
partitions from the client partitions’ operating systems (AIX or Linux) by
using the following command:
mktcpip -h hostname -a IPaddress -i interface -g gateway
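The commands behind steps 1 and 6 can be collected in one place. The sketch below only builds and prints the command strings; the adapter names, VLAN ID, host name, and addresses are assumptions, and the mkvdev and mktcpip commands would be run on the Virtual I/O Server and the client partition, respectively.

```shell
# Illustrative sketch: build (but do not run) the commands used in this section.
# All device names, addresses, and IDs are hypothetical placeholders.
phys_adapter=ent0   # physical adapter (or Link Aggregation device) on the VIOS
virt_adapter=ent1   # virtual Ethernet adapter on the VIOS
default_vlan=1

# Shared Ethernet adapter creation on the Virtual I/O Server (step 1):
sea_cmd="mkvdev -sea $phys_adapter -vadapter $virt_adapter -default $virt_adapter -defaultid $default_vlan"
echo "$sea_cmd"

# TCP/IP configuration on the mobile partition (step 6):
tcpip_cmd="mktcpip -h mobile1 -a 10.1.1.20 -i en0 -g 10.1.1.1"
echo "$tcpip_cmd"
```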

3.11 Distance considerations


There are no architected maximum distances between systems for Live Partition
Mobility. The maximum distance is dictated by the network and storage
configuration used by the systems. Provided both systems are on the same
network, are connected to the same shared storage, and are managed by the
same HMC, then Live Partition Mobility will work. Standard long-range network
and storage performance considerations apply.



Chapter 4. Basic partition migration scenario

This chapter introduces the basics of configuring a Live Partition Mobility
environment on IBM POWER6 technology-based servers using a Hardware
Management Console (HMC)-based configuration. The chapter shows the
detailed steps to migrate a logical partition from a source system to a
destination system in a single flow.

The chapter contains the following topics:

• 4.1, “Basic Live Partition Mobility environment” on page 90
• 4.2, “Virtual I/O Server attributes” on page 93
• 4.3, “Preparing for an active partition migration” on page 94
• 4.4, “Migrating a logical partition” on page 99

© Copyright IBM Corp. 2007, 2009. All rights reserved. 89


4.1 Basic Live Partition Mobility environment
This section shows you a simple configuration for Live Partition Mobility using an
HMC and virtual SCSI disk. A single Virtual I/O Server partition is configured on
the source and destination systems. Using the configuration in Figure 4-1, we explain
the basic components involved in Live Partition Mobility, and guide you through
the configuration and execution tasks.

Live Partition Mobility using N_Port ID Virtualization and virtual Fibre Channel
features are covered in 5.11, “Virtual Fibre Channel” on page 187. Live Partition
Mobility using Integrated Virtualization Manager is covered in Chapter 7,
“Integrated Virtualization Manager for Live Partition Mobility” on page 221.

Figure 4-1 Basic Live Partition Mobility configuration



4.1.1 Minimum requirements
The following minimum requirements are necessary for configuring a basic Live
Partition Mobility environment:
• Hardware Management Console (HMC)
Both the source and the destination systems must be managed by a
Hardware Management Console.
• Same logical memory block size
The logical memory block size must be the same on the source and the
destination system. You can check and update the logical memory block
(LMB) size by using the Advanced System Management Interface. To change
the LMB size, see the steps in 3.5.2, “Logical memory block size” on page 54.
• Mobile partition
A logical partition that is migrated from the source to the destination system
must use virtual SCSI or virtual Fibre Channel disks only. Disks must be
mapped to physical storage that is actually located outside the source and the
destination systems. No internal disks and no dedicated storage adapters are
allowed.
If you want to migrate a running logical partition, the partition must use virtual
Ethernet adapters and virtual SCSI or virtual Fibre Channel disks provided by
a Virtual I/O Server partition, and must not be assigned any physical adapter.
• Virtual I/O Server partition
At least one Virtual I/O Server partition must be installed and activated on
both the source and the destination systems.
Virtual SCSI adapter requirements:
– On the source Virtual I/O Server partition, do not set the adapter as
required and do not select Any client partition can connect when you
create a virtual SCSI adapter. The virtual SCSI adapter must be solely
accessible by the client adapter of the mobile partition.
– On the destination Virtual I/O Server partition, do not create any virtual
SCSI adapters for the mobile partition. These are created automatically by
the migration function.
See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and
Monitoring, SG24-7590 for details about virtual Fibre Channel configuration.
• Network connection
The mobile partition and the Virtual I/O Server partitions on the source and
destination systems must be reachable from the HMC.



For a migration of running partitions (active migration), both the source and
destination Virtual I/O Server partitions must be able to communicate with
each other to transfer the mobile partition state. We suggest you use a
dedicated network that has 1 Gbps of bandwidth or more.
• Shared disks
One or more shared disks must be connected to the source and destination
Virtual I/O Server partitions. At least one physical volume that is mapped by
the Virtual I/O Server to a LUN on external SAN storage must be attached to
the mobile partition.
The reserve_policy attribute of all the physical volumes belonging to the
mobile partition must be set as no_reserve on the source and destination
Virtual I/O Server partitions. When using virtual SCSI disks, you change this
attribute by using the chdev command on the Virtual I/O Server partition:
$ chdev -dev hdiskX -attr reserve_policy=no_reserve
• Power supply
The destination system must be running on a regular power source. If the
destination system is running on a battery power, return the system to its
regular power source before migrating a partition.

4.1.2 Inactive partition migration


An inactive partition migration moves a powered-off partition from the source to
the destination system together with its partition profile.

If a mobile partition has dedicated I/O adapters, it can only participate in the
inactive partition migration. However, even in that case, the dedicated adapters
are automatically removed from the partition profile so that the partition will boot
with only virtual I/O resources after migration. If you have to use dedicated I/O
adapters on the mobile partition after the migration, update the mobile partition’s
profile before booting, add adapters to the mobile partition by using dynamic
LPAR operations, or make the desired resources available by other means.

Note: A good practice is to record the existing configuration at this point


because the profile will be changed during a migration. This record can be
used on the destination system to reconfigure any dedicated adapters.



4.1.3 Active partition migration
An active partition migration moves a running logical partition, including its
operating system and applications, from the source to the destination system
without disrupting the services of that partition.

The following requirements must be met for the active partition migration, in
addition to the requirements listed in 4.1.1, “Minimum requirements” on page 91:
• On both the source and the destination Virtual I/O Server partitions, one
mover service partition is enabled and one automatically configured Virtual
Asynchronous Services Interface (VASI) device is available.
• No physical or dedicated I/O adapters are assigned to the mobile partition.
• An active Resource Monitoring and Control (RMC) connection must exist
between the mobile partition and the HMC.

Note: Any virtual TTY sessions will be disconnected during the migration, but
can be reestablished on the destination system by the user after migration.

4.2 Virtual I/O Server attributes


For the Live Partition Mobility function, two attributes and one virtual device have
been added to the Virtual I/O Server. See 2.1, “Live Partition Mobility
components” on page 20 for more detailed information.

4.2.1 Mover service partition


The mover service partition is a Virtual I/O Server attribute. At least one mover
service partition must be on each of the source and the destination systems for a
mobile partition to participate in an active partition migration. The mover service
partition is not required for inactive partition migration.

4.2.2 Virtual Asynchronous Services Interface device


The Virtual Asynchronous Services Interface (VASI) device must be available on
both the source and the destination Virtual I/O Servers for the mobile partition to
participate in an active partition migration. The VASI device is not required for
inactive partition migration.



Important: Configuring the VASI device is not required. The VASI device is
automatically created and configured when the Virtual I/O Server is installed.

4.2.3 Time reference


The Time reference is an attribute of partitions, including Virtual I/O Server
partitions. This partition attribute is only supported on managed systems that are
capable of active partition migration.

Synchronizing the time-of-day clocks for the source and destination Virtual I/O
Server partitions is optional for both active and inactive partition migration.
However, it is a recommended step for active partition migration. If you choose
not to complete this step, the source and destination systems will synchronize
the clocks while the mobile partition is moving from the source system to the
destination system.

4.3 Preparing for an active partition migration


This section shows how to enable the mover service partition. The mover
service partition is required for active partition migration only.

4.3.1 Enabling the mover service partition


You can set the mover service partition attribute at the time you create a Virtual
I/O Server partition, or dynamically for a running Virtual I/O Server.

To set the mover service partition attribute during the creation of a Virtual I/O
Server partition:
1. In the navigation pane, expand Systems Management → Servers, and
select the system on which you want to create a new Virtual I/O Server
partition.



2. In the Tasks pane, expand Configuration → Create Logical Partition, and
select VIO Server, as shown in Figure 4-2, to start the Create LPAR Wizard.

Figure 4-2 Hardware Management Console Workplace



3. Enter the partition name, change the ID if you want to, and check the Mover
service partition box in the Create LPAR Wizard window. See Figure 4-3.

Figure 4-3 Create LPAR Wizard window

4. The mover service partition will be activated with the partition. Proceed with
the remaining steps of the Virtual I/O Server partition creation.

You can also set the mover service partition attribute dynamically for an existing
Virtual I/O Server partition while the partition is in the Running state:
1. In the navigation pane, expand Systems Management → Servers, and
select the desired system.
2. In the Contents pane (the top right of the Hardware Management Console
Workplace), select the Virtual I/O Server for which you want to enable the
mover service partition attribute.



3. Click the view popup menu button and select Configuration → Properties, as
shown in Figure 4-4.


Figure 4-4 Changing the Virtual I/O Server partition property

4. Check the Mover service partition box on the General tab in the Partition
Properties window, and click OK. See Figure 4-5.

Figure 4-5 Enabling the Mover service partition attribute
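Besides the GUI steps above, the HMC command line can change partition attributes with the chsyscfg command. The sketch below only builds and prints such a command; the managed system and partition names are placeholders, and the msp attribute name should be verified against your HMC level.

```shell
# Illustrative sketch: build (but do not run) an HMC chsyscfg command that
# enables the mover service partition attribute on a running Virtual I/O Server.
# All names are hypothetical; verify the msp attribute on your HMC level.
managed_system="9117-MMA-SN101F170-L10"
vios_partition="VIOS1"

msp_cmd="chsyscfg -r lpar -m $managed_system -i \"name=$vios_partition,msp=1\""
echo "$msp_cmd"
```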



4.3.2 Enabling the Time reference
After creating a partition, you can optionally enable the Time reference attribute
of the partition, as follows:
1. In the navigation pane, expand Systems Management → Servers, and
select the system that has the partition for which you want to enable the Time
reference attribute.
2. In the Contents pane (the top right of the Hardware Management Console
Workplace), select the partition. In this example, select the Virtual I/O Server
partition.
3. Click the view popup menu button and select Configuration → Properties.
4. Select the Settings tab, select Enabled for the Time reference attribute, and
click OK, as shown in Figure 4-6.

Figure 4-6 Enabling the Time reference attribute



4.4 Migrating a logical partition
This section shows how to migrate a logical partition, called the mobile
partition, from the source to the destination system. It also provides examples,
especially of an active migration.

The main steps for the migration are:


1. Perform the validation steps and eliminate errors.
2. Perform inactive or active migration.
3. Migrate the mobile partition.

4.4.1 Performing the validation steps and eliminating errors


Before performing a migration, you should follow the validation steps. These
steps are optional but can help to eliminate errors. You can perform the validation
steps by using the HMC GUI or CLI. In this section, we show the GUI steps. For
information about the CLI, see 5.7.1, “The migrlpar command” on page 163.
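As noted above, the same validation can also be run from the HMC command line with the migrlpar command (covered in 5.7.1). The sketch below only builds and prints the validate and migrate command strings; the system and partition names are taken from this scenario but should be treated as placeholders.

```shell
# Illustrative sketch: build (but do not run) migrlpar command lines for
# validation (-o v) and migration (-o m). All names are placeholders.
source_system="9117-MMA-SN101F170-L10"
dest_system="9117-MMA-SN10F6A0-L9"
partition="mobile"

validate_cmd="migrlpar -o v -m $source_system -t $dest_system -p $partition"
migrate_cmd="migrlpar -o m -m $source_system -t $dest_system -p $partition"

echo "$validate_cmd"
echo "$migrate_cmd"
```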
1. In the navigation pane, expand Systems Management → Servers, and
select the source system.
2. In the contents pane (the top right of the Hardware Management Console
Workplace), select the partition that you will migrate to the destination system.



3. Click the view popup menu button and select Operations → Mobility →
Validate, as shown in Figure 4-7, to start the validation process.


Figure 4-7 Validate menu on the HMC



4. Select the destination system, specify Destination profile name and Wait
time, and then click the Validate button (Figure 4-8).

Figure 4-8 Selecting the Remote HMC and Destination System

If you are proceeding with this step when the mobile partition is in the Not
Activated state, the destination and source mover service partition and wait
time entries do not appear, because these are not required for the inactive
partition migration.

Note: Figure 4-8 on page 101 shows the option of entering a remote
HMC’s information. This step applies only to a remote migration between
systems managed by different HMCs. Our example shows migration of a
partition between systems managed by a single HMC. See 5.4, “Remote
Live Partition Mobility” on page 130 for more details on remote migration.

5. Check for errors or warnings in the Partition Validation Errors/Warnings
window, and eliminate any errors.
If any errors occur, check the messages in the window and the prerequisites
for the migration. You cannot perform the migration steps with any errors.



For example, if you are proceeding with the validation steps on the mobile
partition with physical adapters in the Running state (active migration), then
you get the error shown in Figure 4-9.

Figure 4-9 Partition Validation Errors

If the mobile partition is in the Not Activated state, a warning message is
reported, as shown in Figure 4-10.

Figure 4-10 Partition Validation Warnings



6. After closing the Partition Validation Errors/Warnings window, a validation
window, as shown in Figure 4-11, opens again.
If you have no errors in the previous step, you may perform the migration at
this point by clicking the Migrate button.

Figure 4-11 Validation window after validation

4.4.2 Inactive or active migration


The migration type depends on the state of the mobile partition.

If you want to perform an inactive migration, the mobile partition must be
powered off and in the Not Activated state.

If you want to perform an active migration, the mobile partition must be in the
Running state, and no physical or dedicated I/O adapters must be assigned to it.
For details about the active partition migration requirements, see 4.1.3, “Active
partition migration” on page 93.



4.4.3 Migrating a mobile partition
After completing the validation steps, migrate the mobile partition from the source
to the destination system. You can perform the migration steps by using the HMC
GUI or CLI. For more information about the CLI, see 5.7.1, “The migrlpar
command” on page 163.

In this scenario, we are going to migrate a partition named mobile from the
source system (9117-MMA-SN101F170-L10) to the destination system
(9117-MMA-SN10F6A0-L9). To migrate a mobile partition:
1. In the navigation pane, expand Systems Management → Servers, and
select the source system.
At this point, you can see that the mobile partition is on the source system, as
shown in Figure 4-12.

Figure 4-12 System environment before migrating

2. In the contents pane, select the partition to migrate to the destination system,
that is, the mobile partition.



3. Click the view popup menu button and select Operations → Mobility →
Migrate, as shown in Figure 4-13, to start the Partition Migration wizard.


Figure 4-13 Migrate menu on the HMC

4. Check the Migration Information of the mobile partition in the Partition
Migration wizard.



If the mobile partition is powered off, Migration Type is Inactive. If the partition
is in the Running state, it is Active, as shown in Figure 4-14.

Figure 4-14 Migration Information

5. You can specify the New destination profile name in the Profile Name panel,
as shown in Figure 4-15 on page 107.



If you leave the name blank or do not specify a unique profile name, the
profile on the destination system will be overwritten.

Figure 4-15 Specifying the profile name on the destination system



6. Optionally enter the Remote HMC network address and Remote User. In our
example, we use a single HMC. See Figure 4-16. Click Next.

Figure 4-16 Optionally specifying the Remote HMC of the destination system



7. Select the destination system and click Next. See Figure 4-17.

Figure 4-17 Selecting the destination system

The HMC then validates the partition migration environment.



8. Check errors or warnings in the Partition Validation Errors/Warnings panel,
Figure 4-18, and eliminate any errors. If errors exist, you cannot proceed to
the next step. If only warnings exist, you may proceed to the next step.

Figure 4-18 Sample of Partition Validation Errors/Warnings



9. If you are performing an inactive migration, skip this step and go to step 10
on page 112.
If you are performing an active migration, select the source and the
destination mover service partitions to be used for the migration. See
Figure 4-19.

Figure 4-19 Selecting mover service partitions

In this basic scenario, one Virtual I/O Server partition is configured on the
destination system, so the wizard window shows only one mover service
partition candidate. If you have more than one Virtual I/O Server partition on
the source or on the destination system, you can select which mover server
partitions to use.



10. Select the VLAN configuration. See Figure 4-20.

Figure 4-20 Selecting the VLAN configuration



11. Select the virtual storage adapter assignment. See Figure 4-21.
In this case, one Virtual I/O Server partition is configured on each system, so
this wizard window shows one candidate only. If you have more than one
Virtual I/O Server partition on the destination system, you may choose which
Virtual I/O Server to use as the destination.

Figure 4-21 Selecting the virtual SCSI adapter



12. Select the shared processor pool from the list of shared processor pools
matching the source partition’s shared processor pool configuration. See
Figure 4-22.

Figure 4-22 Specifying the shared processor pool

Note: If there is only one shared processor pool, this option might not
appear. See 5.5, “Multiple shared processor pools” on page 147 for more
information about shared processor pools and Live Partition Mobility.

13. Specify the wait time in minutes (Figure 4-23 on page 115).


The wait time value is passed to the commands that are invoked on the HMC
and perform migration-related operations on the relevant partitions using the
Resource Monitoring and Control (RMC) protocol.
For example, the command syntax of drmgr can be used to install and
configure dynamic logical partitioning (dynamic LPAR) scripts:
drmgr {-i script_name [-w minutes] [-f] | -u script_name} [-D hostname]



The wait time value is used as the argument for the -w option. If you specify 5
minutes as the wait time, as shown in Figure 4-23, the drmgr command is
executed with -w 5.

Figure 4-23 Specifying wait time
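As a sketch of that mapping, the following fragment builds the drmgr call that a 5-minute wizard wait time corresponds to. The DLPAR script path is a hypothetical example, and the command string is only echoed here because drmgr runs on the AIX partition itself.

```shell
# Build the drmgr call that corresponds to a 5-minute wizard wait time.
# The script path is hypothetical; on AIX you would run the command directly.
wait_time=5                                   # value entered in the HMC wizard
script=/usr/lib/dr/scripts/all/app_dlpar.sh   # hypothetical DLPAR script
cmd="drmgr -i $script -w $wait_time"
echo "$cmd"
```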



14. Check the settings that you have specified for this migration on the Summary
panel, and then click Finish to begin the migration. See Figure 4-24.

Figure 4-24 Partition Migration Summary panel

15. The migration status and progress are shown in the Partition Migration Status
panel, as shown in Figure 4-25.

Figure 4-25 Partition Migration Status window



16. When the Partition Migration Status window indicates that the migration is
100% complete, verify that the mobile partition is Running on the destination
system. The mobile partition is on the destination system, as shown in
Figure 4-26.

Figure 4-26 Migrated partition

17. If you keep a record of the virtual I/O configuration of the partitions, check and
record the migrating partition’s configuration on the destination system.
Although the migrating partition retains the same slot numbers as on the
source system, the server virtual adapter slot numbers can be different
between the source and destination Virtual I/O Servers. Also, the virtual
target device name might change during migration.

Chapter 5. Advanced topics


This chapter discusses various advanced topics relating to Live Partition Mobility.
The chapter assumes you are familiar with the information in the preceding
chapters.

This chapter contains the following topics:


• 5.1, “Dual Virtual I/O Servers” on page 120
• 5.2, “Multiple concurrent migrations” on page 128
• 5.3, “Dual HMC considerations” on page 130
• 5.4, “Remote Live Partition Mobility” on page 130
• 5.5, “Multiple shared processor pools” on page 147
• 5.6, “Migrating a partition with physical resources” on page 149
• 5.7, “The command-line interface” on page 162
• 5.8, “Migration awareness” on page 177
• 5.9, “Making applications migration-aware” on page 178
• 5.10, “Making kernel extension migration aware” on page 185
• 5.11, “Virtual Fibre Channel” on page 187
• 5.12, “Processor compatibility modes” on page 205



5.1 Dual Virtual I/O Servers
Multiple Virtual I/O Servers are often deployed in systems where there is a
requirement for logical partitions to continue to use their virtual resources even
during the maintenance of a Virtual I/O Server.

This discussion relates to the common practice of using more than one Virtual
I/O Server to allow for concurrent maintenance, and is not limited to only two
servers. Also, Virtual I/O Servers may be created to offload the mover services to
a dedicated partition.

Live Partition Mobility does not make any changes to the network setup on the
source and destination systems. It only checks that all virtual networks used by
the mobile partition have a corresponding shared Ethernet adapter on the
destination system. Shared Ethernet failover might or might not be configured on
either the source or the destination systems.

Important: If you are planning to use shared Ethernet adapter failover,
remember not to assign the Virtual I/O Server’s IP address to the shared
Ethernet adapter. Create another virtual Ethernet adapter and assign the IP
address to it. Partition migration requires network connectivity through the
RMC protocol to the Virtual I/O Server. The backup shared Ethernet adapter is
always offline, and so is any IP address assigned to it.

When multiple Virtual I/O Servers are involved, multiple virtual SCSI, and virtual
Fibre Channel combinations are possible. Access to the same storage area
network (SAN) disk may be provided on the destination system by multiple
Virtual I/O Servers for use with virtual SCSI mapping. Similarly, multiple Virtual
I/O Servers can provide access with multiple paths to a specific set of assigned
LUNs for virtual Fibre Channel usage. Live Partition Mobility automatically
manages the virtual SCSI and virtual Fibre Channel configuration if an
administrator does not provide specific mappings.

The partition that is moving must keep the same number of virtual SCSI and
virtual Fibre Channel adapters after migration and each virtual disk must remain
connected to the same adapter or adapter set. An adapter’s slot number can
change after migration, but the same device name is kept by the operating
system for both adapters and disks.

A migration can fail validation checks and is not started if the moving partition
adapter and disk configuration cannot be preserved on the destination system.
In this case, you are required to modify the partition configuration before starting
the migration.



Tip: The best practice is to always perform a validation before performing a
migration. The validation checks the configuration of the involved Virtual I/O
Servers and shows you the configuration that will be applied. Use the
validation menu on the GUI or the lslparmigr command described in 5.7.2,
“The lslparmigr command” on page 166.
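The validate-before-migrate practice can be scripted from the HMC CLI. The following is a sketch of the flow using the system and partition names from Chapter 4; the run wrapper echoes the HMC commands rather than executing them, because migrlpar exists only on an HMC.

```shell
# Validate first, then migrate only if validation succeeds (sketch).
run() { echo "+ $*"; }   # replace with direct execution on a real HMC

src=9117-MMA-SN101F170-L10
dst=9117-MMA-SN10F6A0-L9
lpar=mobile
run migrlpar -o v -m "$src" -t "$dst" -p "$lpar"   # -o v: validate only
run migrlpar -o m -m "$src" -t "$dst" -p "$lpar"   # -o m: migrate
```

On a real HMC, run the validation first and start the migration only when it completes without errors.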

In this section, we describe three different migration scenarios where the source
and destination systems provide disk access either with one or two Virtual I/O
Servers using virtual SCSI adapters. More information about virtual Fibre
Channel adapters can be found in 5.11, “Virtual Fibre Channel” on page 187.

5.1.1 Dual Virtual I/O Server and client mirroring


Dual Virtual I/O Server and client mirroring may be used when you have two
independent storage subsystems providing disk space with data mirrored across
them. It is not required that your mirroring use two independent storage
subsystems, but it is recommended. With this setup, the partition can continue to
run if one of the subsystems is taken offline.



If the destination system has two Virtual I/O Servers, one of them should be
configured to access the disk space provided by the first storage subsystem; the
other must access the second subsystem, as shown in Figure 5-1.

Figure 5-1 Dual VIOS and client mirroring to dual VIOS before migration

The migration process automatically detects which Virtual I/O Server has access
to which storage and configures the virtual devices to keep the same disk access
topology.



When migration is complete, the logical partition has the same disk configuration
it had on the previous system, still using two Virtual I/O Servers, as shown in
Figure 5-2.

Figure 5-2 Dual VIOS and client mirroring to dual VIOS after migration



If the destination system has only one Virtual I/O Server, the migration is still
possible and the same virtual SCSI setup is preserved at the client side. The
destination Virtual I/O Server must have access to all disk spaces and the
process creates two virtual SCSI adapters on the same Virtual I/O Server, as
shown in Figure 5-3.

Figure 5-3 Dual VIOS and client mirroring to single VIOS after migration

5.1.2 Dual Virtual I/O Server and multipath I/O


With multipath I/O, the logical partition accesses the same disk data using two
different paths, each provided by a separate Virtual I/O Server. One path is active
and the other is standby.



The migration is possible only if the destination system is configured with two
Virtual I/O Servers that can provide the same multipath setup. They both must
have access to the shared disk data, as shown in Figure 5-4.

Figure 5-4 Dual VIOS and client multipath I/O to dual VIOS before migration



When migration is complete, on the destination system, the two Virtual I/O
Servers are configured to provide the two paths to the data, as shown in
Figure 5-5.

Figure 5-5 Dual VIOS and client multipath I/O to dual VIOS after migration

If the destination system is configured with only one Virtual I/O Server, the
migration cannot be performed. The migration process would create two paths
using the same Virtual I/O Server, but this setup is not allowed, because having
two virtual target devices that map the same backing device on different virtual
SCSI server devices is not possible.

To migrate the partition, you must first remove one path from the source
configuration before starting the migration. The removal can be performed
without interfering with the running applications. The configuration becomes a
simple single Virtual I/O Server migration.
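A sketch of the path removal on the AIX client follows. The device names are hypothetical, and the commands are echoed by a wrapper rather than executed, because they must be run on the mobile partition itself.

```shell
# Drop the standby vSCSI path before migrating to a single-VIOS destination.
# Device names (hdisk0, vscsi1) are hypothetical examples.
run() { echo "+ $*"; }   # replace with direct execution on the AIX client

run rmpath -l hdisk0 -p vscsi1 -d   # delete the standby path through vscsi1
run rmdev -dl vscsi1                # remove the now-unused client adapter
```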

5.1.3 Single to dual Virtual I/O Server


A logical partition that is using only one Virtual I/O Server for virtual disks may be
migrated to a system where multiple Virtual I/O Servers are available.

Because the migration never changes a partition’s configuration, only one Virtual
I/O Server is used on the destination system.



If access to all disk data required by the partition is provided by only one Virtual
I/O Server on the destination system, after migration the partition will use just
that Virtual I/O Server. If no destination Virtual I/O Server provides all disk data,
the migration cannot be performed.

When both destination Virtual I/O Servers have access to all the disk data, the
migration can select either one or the other. When you start the migration, you
have the option of choosing a specific Virtual I/O Server. The HMC automatically
makes a selection if you do not specify the server. The situation is shown in
Figure 5-6.

Figure 5-6 Single VIOS to dual VIOS before migration

When the migration is performed using the GUI on the HMC, a list of possible
Virtual I/O Servers to pick from is provided. By default, the command-line
interface makes the automatic selection if no specific option is provided.



After migration, the configuration is similar to the one shown in Figure 5-7.

Figure 5-7 Single VIOS to dual VIOS after migration

5.2 Multiple concurrent migrations


The same system can handle multiple concurrent partition migrations, in any mix
of inactive and active migrations.

In many scenarios, more than one migration may be started on the same system.
For example:
• A review of the entire infrastructure detects that a different system location of
some logical partition may improve global system usage and service quality.
• A system is planned to enter maintenance and must be shut down. Some of
its partitions cannot be stopped or the planned maintenance time is too long
to satisfy service level agreements.

The maximum number of concurrent migrations on a system can be identified by
using the lslparmigr command on the HMC, with the following syntax:
lslparmigr -r sys -m <system>
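A sketch of reading the limit from the command output follows. The attribute names reflect what lslparmigr -r sys reports; treat the sample line and its values as hypothetical, and on an HMC pipe the live command output instead.

```shell
# Extract the concurrent active-migration limit from a captured
# lslparmigr -r sys output line (sample values are hypothetical).
sample='inactive_lpar_mobility_capable=1,num_inactive_migrations_supported=4,num_inactive_migrations_in_progress=0,active_lpar_mobility_capable=1,num_active_migrations_supported=4,num_active_migrations_in_progress=1'
limit=$(echo "$sample" | tr ',' '\n' |
        awk -F= '$1 == "num_active_migrations_supported" {print $2}')
echo "active migrations supported: $limit"
```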



Several practical considerations should be taken into account when planning for
multiple migrations, especially when the time required by the migration process
has to be evaluated.

For each mobile partition, you must use an HMC GUI wizard or an HMC
command. While a migration is in progress, you can start another one. When the
number of migrations to be executed grows, the setup time using the GUI can
become long and you should consider using the CLI instead. The migrlpar
command may be used in scripts to start multiple migrations in parallel.
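A minimal sketch of such a script follows. The system and partition names are hypothetical, and a wrapper echoes the HMC commands instead of executing them.

```shell
# Start several migrations in parallel from a script (sketch).
run() { echo "+ $*"; }   # replace with direct execution on a real HMC

src=SYS1   # hypothetical source system
dst=SYS2   # hypothetical destination system
for lpar in lpar1 lpar2 lpar3; do
  run migrlpar -o m -m "$src" -t "$dst" -p "$lpar" &   # one migration per job
done
wait   # block until all background migrations return
```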

An active migration requires more time to complete than an inactive migration
because the system performs additional activities to keep applications running
while the migration is in progress. Consider the following information:
• The time required to complete an active migration depends on the size of the
memory to be migrated and on the mobile partition’s workload.
• The Virtual I/O Servers selected as mover service partitions are loaded by
memory moves and network data transfer, as follows:
– High speed network transfers can become processor-intensive workloads.
– At most, four concurrent active migrations can be managed by the same
mover service partition.

The active migration process has been designed to handle any partition memory
size and it is capable of managing any memory workload. Applications can
update memory with no restriction during migration and all memory changes are
taken into account, so elapsed migration time can change with workload.
Although the algorithm is efficient, planning the migration during low activity
periods can help to reduce migration time.

Virtual I/O Servers selected as mover service partitions are involved in the partition’s
memory migration and must manage high network traffic. Network management
can cause high CPU usage and usual performance considerations apply; use
uncapped Virtual I/O Servers and add virtual processors if the load increases.
Alternatively, create dedicated Virtual I/O Servers on the source and destination
systems that provide the mover service function separating the service network
traffic from the migration network traffic. You can combine or separate
virtualization functions and mover service functions to suit your requirements.

If multiple mover service partitions are available on either the source or
destination systems, we suggest distributing the load among them. This process
can be done explicitly by selecting the mover service partitions, either by using
the GUI or the CLI. Each mover service partition can manage up to four
concurrent active migrations and explicitly using multiple Virtual I/O Servers
avoids queuing of requests.
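One way to distribute the load is a simple round-robin assignment, sketched below with hypothetical MSP and partition names; the chosen MSP for each migration would then be specified through the GUI or the CLI.

```shell
# Round-robin mobile partitions across two mover service partitions so that
# neither MSP exceeds its limit of four concurrent active migrations.
msp_for() {               # pick an MSP by migration index
  case $(( $1 % 2 )) in
    0) echo msp1 ;;       # hypothetical MSP names
    *) echo msp2 ;;
  esac
}

i=0
for lpar in lpar1 lpar2 lpar3 lpar4 lpar5; do
  echo "$lpar -> $(msp_for $i)"
  i=$((i + 1))
done
```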



5.3 Dual HMC considerations
The HMC is the center of system management for IBM Power Systems, and its
unavailability does not affect service in any way. However, deploying two HMCs
to manage the same systems is possible.

In a dual HMC configuration, both HMCs see the same system’s status, have the
same configuration rights, and can perform the same actions. To avoid
concurrent operations on the same system, a locking mechanism is in place that
allows the first configuration change to occur and the second one to fail with a
message showing the identifier of the locking HMC.

Live Partition Mobility is a configuration change that involves two separate
systems. Also, the migration process requires that no additional modifications
occur on the involved objects until migration is completed.

The HMC that initiates a migration takes a lock on both managed systems and
the lock is released when migration is completed. The other HMC can show the
status of migration but cannot issue any additional configuration changes on the
two systems. Although the lock can be manually broken, carefully consider this
option.

Consider the locking mechanism when planning migration. Most additional
system management actions should be performed using the same HMC that is
performing the migration. The other HMC can continue to be used for monitoring
purposes, but not for configuration changes until migration is completed.

When multiple migrations are planned between two systems, multiple HMC
commands are issued. The first migration task takes an HMC lock on both
systems, so subsequent migrations must be issued on the same HMC. In effect,
only one HMC can be used while multiple concurrent migrations are executed.

5.4 Remote Live Partition Mobility


This section focuses on Live Partition Mobility and its ability to migrate a logical
partition between two IBM Power Systems servers, each managed by a separate
Hardware Management Console. Remote migrations require coordinated
movement of a partition’s state and resources over a secure network channel to
a remote HMC.



The following list indicates the high-level prerequisites for remote migration. If
any of the following elements are missing, a migration cannot occur:
• A ready source system that is migration-capable
• A ready destination system that is migration-capable
• Compatibility between the source and destination systems
• A destination system managed by a remote HMC
• Network communication between the local and remote HMC
• A migratable, ready partition to be moved from the source system to the
destination system. For an inactive migration, the partition must be turned off,
but must be capable of booting on the destination system.
• For active migrations, an MSP on the source and destination systems
• One or more SANs that provide connectivity to all of the mobile partition’s
disks to the Virtual I/O Server partitions on both the source and destination
servers. The mobile partition accesses all migratable disks through virtual
devices (virtual Fibre Channel, virtual SCSI, or both). The LUNs used for
virtual SCSI must be zoned and masked to the Virtual I/O Servers on both
systems. Virtual Fibre Channel LUNs should be configured as described in
Chapter 2 of PowerVM Virtualization on IBM System p: Managing and
Monitoring, SG24-7590. Hardware-based iSCSI connectivity may be used in
addition to SAN. SCSI reservation must be disabled.
• The mobile partition’s virtual disks must be mapped to LUNs; they cannot be
part of a storage pool or logical volume on the Virtual I/O Server.
• One or more physical IP networks (LANs) that provide the necessary network
connectivity for the mobile partition through the Virtual I/O Server partitions
on both the source and destination servers. The mobile partition accesses all
migratable network interfaces through virtual Ethernet devices.
• An RMC connection to manage inter-system communication

Remote migration operations require that each HMC has RMC connections to its
own system’s Virtual I/O Servers and a connection to its own system’s service
processor. The local HMC does not need RMC connections to the remote
system’s Virtual I/O Servers, nor does it need to connect to the remote system’s
service processor.

The remote active and inactive migrations follow the same workflow as described
in Chapter 2, “Live Partition Mobility mechanisms” on page 19. The local HMC,
which manages the source server in a remote migration, serves as the
controlling HMC. The remote HMC, which manages the destination server,
receives requests from the local HMC and sends responses over a secure
network channel.



5.4.1 Requirements for remote migration
The Remote Live Partition Mobility feature is available starting with HMC Version
7 Release 3.4. This feature allows a user to migrate a client partition to a
destination server that is managed by a different HMC. The function relies on
Secure Shell (SSH) to communicate with the remote HMC.

The following list indicates the requirements for remote HMC migrations:
• A local HMC managing the source server
• A remote HMC managing the destination server
• HMC Version 7 Release 3.4 or later on both HMCs
• Network access to the remote HMC
• SSH key authentication to the remote HMC

The source and destination servers, mover service partitions, and Virtual I/O
Servers are required to be configured exactly as though they were going to be
performing migrations managed by a single HMC in the basic scenario as
described in Chapter 3, “Requirements and preparation” on page 45.

To initiate the remote migration operation, you may use only the HMC that
manages the mobile partition.

A recommendation is to have some IBM Power Systems servers use private
networks to access the HMC. The ability to migrate a partition remotely allows
Live Partition Mobility between systems managed by HMCs that are also using
separate private networks.



Figure 5-8 displays the Live Partition Mobility infrastructure involving the two
remote HMCs and their respective managed systems.

Figure 5-8 Live Partition Mobility infrastructure with two HMCs



Figure 5-9 displays the infrastructure involving private networks that link each
service processor to its HMC. The HMC for both systems contains a second
network interface that is connected to the public network.

Figure 5-9 Live Partition Mobility infrastructure using private networks



Figure 5-10 shows the situation where one POWER6 system is in communication
with its HMC on a private network, and the destination server is communicating
by using the public network.

Figure 5-10 One public and one private network migration infrastructure

5.4.2 HMC considerations


Preparation for remote migration involves the same steps as explained in 2.5.2,
“Preparation” on page 32. To prepare for mobility, two additional steps are
necessary:
1. Configure network communication between the HMCs.
2. Authenticate the local HMC with the destination’s HMC.

The steps to configure Virtual I/O Servers, the client partition, mover service
partitions, and partition profiles do not change.

Use dedicated networks with a bandwidth of 1 Gbps or more. This applies to
each involved HMC, Virtual I/O Server, and mover service partition.

Remote migration capability


Confirm that each HMC is capable of performing remote migrations, which
requires access to the CLI on each HMC. To determine whether the HMC is
capable of remote migration, use the lslparmigr -r manager command. If the
HMC is capable, the remote_lpar_mobility_capable attribute displays a value
of 1; if the HMC is incapable, the attribute displays a value of 0.
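A sketch of checking that attribute programmatically follows; the sample line is hypothetical, and on an HMC you would pipe the live lslparmigr output instead.

```shell
# Evaluate the remote-mobility attribute from a captured lslparmigr output
# line (the sample value is hypothetical).
sample='remote_lpar_mobility_capable=1'
cap=$(echo "$sample" | tr ',' '\n' |
      awk -F= '$1 == "remote_lpar_mobility_capable" {print $2}')
if [ "$cap" = "1" ]; then
  echo "remote migration capable"
else
  echo "not capable"
fi
```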

HMC network communication


Verify that network communication exists between the two HMCs involved
in the migration. To test the network, use the HMC as follows:
1. In the navigation area, select HMC Management.
2. In the Operations section of the contents area, select Test Network
Connectivity.
3. In the Network Diagnostic Information window, select the Ping tab.
4. In the text box, enter the IP address or host name of the remote HMC, and
click Ping.
5. Review the results to ensure that no packets were lost, as shown in
Figure 5-11:

Figure 5-11 Network ping successful to remote HMC

SSH authentication keys


Allow communication between the local and remote HMC through SSH key
authentication. The local user must retrieve authentication keys from the user on
the remote HMC. This retrieval requires access to the CLI on the local HMC. To
allow the local HMC to communicate with the remote HMC, first ensure that
remote command execution is enabled on the remote HMC.



To enable remote command execution (see Figure 5-12):
1. In the navigation area, select HMC Management.
2. In the Administration section of the contents area, select Remote Command
Execution.

Figure 5-12 HMC option for remote command execution

3. In the Remote Command Execution window, enable the check box to Enable
remote command execution using the ssh facility, as shown in
Figure 5-13. Click OK.

Figure 5-13 Remote command execution window



Use the mkauthkeys command from the CLI of the HMC that manages the mobile
partition to set up SSH key authentication with the remote HMC. You must be
logged in as a user with hmcsuperadmin privileges, such as the hscroot user,
and authenticate to the remote HMC by using a remote user ID with
hmcsuperadmin privileges. Authentication to a remote system (in our case,
9.3.5.180) using RSA authentication is displayed in Example 5-1. For details
about the mkauthkeys command, see 5.7.4, “The mkauthkeys command” on
page 173.

Example 5-1 mkauthkeys command execution


hscroot@hmc1:~> mkauthkeys --ip 9.3.5.180 -u hscroot -t rsa
Enter the password for user hscroot on the remote host 9.3.5.180:

5.4.3 Remote validation and migration


Partition migration to a remote destination uses a function that is available
starting in HMC Version 7 Release 3.4. The function directs the local HMC to
contact the remote HMC and request a list of all migration-ready IBM Power
Systems servers. This function was added to the validation and migration steps
for the HMC GUI and CLI.

Validation steps for remote migration


You may validate migration to an authenticated remote HMC by using the HMC
GUI or CLI. The following steps use the GUI. If you want more information about
the CLI, see 5.7.1, “The migrlpar command” on page 163.

To validate by using the GUI:


1. In the navigation pane, expand Systems Management  Servers, and
select the source system.
2. In the contents pane (the top right of the Hardware Management Console
Workplace), select the partition that you will migrate to the destination
system.
3. Click the view popup menu button and select Operations → Mobility →
Validate to open the validation window.



4. Enter the Remote HMC IP address or host name and the Remote User ID
information, which was used for authentication, and then click the Refresh
Destination System button. See Figure 5-14.
All migration-ready systems managed by the remote HMC are listed. If your
local HMC manages any other migration-ready systems, they appear in the
Destination system listing prior to the refresh.

Figure 5-14 Remote migration information entered for validate task

If the destination systems refresh properly, continue to step 5 on page 140.

If you encounter an error, check the following items:
a. SSH authentication was configured properly.
b. Network communication to the remote HMC is available.
c. The remote HMC address and remote user ID were entered correctly.
d. Migration-ready systems exist on the remote HMC.



5. Select the remote Destination system. You have the option of also specifying
the Destination profile name and Wait time. Click the Validate button
(Figure 5-15).
If you perform this step when the mobile partition is in the Not
Activated state, the destination and source mover service partition and wait
time entries do not appear, because they are not required for an inactive
partition migration.

Figure 5-15 Validation window after destination system refresh

6. If errors or warnings occur, the Partition Validation Errors/Warnings window


opens. Perform the following steps:

Note: If the window does not appear, you have no errors or warnings.

a. Check the messages in the window and the prerequisites for the migration:
• For error messages: You cannot perform the migration steps if errors
exist. Eliminate any errors.
• For warning messages: If only warnings occur (no errors), you may
migrate the partition after the validation steps.



b. Close the Partition Validation Errors/Warnings window. A validation
window opens again, as shown in Figure 5-16. If you had warning
messages only (no error messages), you may click the Migrate button.

Figure 5-16 Validation window after validation

Migration steps for remote migration


After the validation steps, migrate the mobile partition from the source to the
destination system managed by the remote HMC. You can perform the migration
steps by using the HMC GUI or CLI. For information about the CLI, see 5.7, “The
command-line interface” on page 162.

In this scenario, we migrate a partition named mobile from the source system
(9117-MMA-SN100F6A0-L9) managed by the local HMC (9.3.5.128) to the
destination system (9117-MMA-SN101F170-L10) on the remote HMC
(9.3.5.180), as follows:
1. In the navigation pane on the local HMC, expand Systems Management →
Servers, and select the source system.



At this point, as shown in Figure 5-17, the mobile partition is on the source
system managed by the local HMC, and only the source system is
available on the local HMC in this scenario.

Figure 5-17 Local HMC environment before migrating

2. In the contents pane, select the partition that you will migrate to the
destination system, that is, the mobile partition.
3. Click the view popup menu button and select Operations → Mobility → Migrate to
start the Partition Migration wizard.
4. Check the Migration Information of the mobile partition in the Partition
Migration wizard.
If the mobile partition is powered off, the Migration Type is inactive. If the
partition is in the Running state, the Migration Type is active.
You can specify the New destination profile name in the Profile Name
window.
If you leave the name blank or do not specify a unique profile name, the
profile on the destination system will be overwritten.



5. Select Remote Migration, enter the Remote HMC and Remote User
information, as shown in Figure 5-18, and then click Next.

Figure 5-18 Remote HMC selection window in Migrate task

6. Select the destination system and click Next. The HMC validates the partition
migration environment.
7. Check errors or warnings in the Partition Validation Errors/Warnings window,
and eliminate any errors. If there are any errors, you cannot proceed to the
next step. You may proceed to the next step if it shows warnings only.
8. If you are performing an inactive migration, skip this step and go to step 9.
If you are performing active migration, select the source and the destination
mover service partitions to be used for the migration.
9. Select the VLAN configuration.
10.Select the virtual storage adapter assignment.
11.Specify the wait time in minutes.



12.Check the settings that you have specified for this migration on the Summary
window, and then click Finish to begin the migration. See Figure 5-19.

Figure 5-19 Remote migration summary window

13.After migration is complete, check that the mobile partition is on the


destination system on the remote HMC.



You can see the mobile partition is on the destination system, as shown in
Figure 5-20.

Figure 5-20 Remote HMC view after remote migration success

14.If you keep a record of the virtual I/O configuration of the partitions, check the
migrating partition’s configuration in the destination system. Although the
migrating partitions retain the same slot numbers as on the source systems,
the server virtual adapter slot numbers can be different between the source
and destination Virtual I/O Servers. Also, the virtual target device name can
change during migration.

5.4.4 Command-line interface enhancements


The lslparmigr and migrlpar commands have been enhanced to request
information from a remote HMC. The commands now include the --ip and -u
flags, which allow you to specify the remote HMC that manages the destination system.

The next two examples show how the --ip and -u flags are used with the
lslparmigr command (Example 5-2 on page 146) and the migrlpar command
(Example 5-3 on page 146).



Example 5-2 The lslparmigr command with remote options
lslparmigr -r msp --ip 9.3.5.180 -u hscroot -m 9117-MMA-SN100F6A0-L9 \
-t 9117-MMA-SN101F170-L10 --filter lpar_names=PROD

source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/
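The attribute=value pairs in this output can be picked apart in a script. The following sketch parses the sample output shown above with standard shell tools; the attribute names come from the example, and this parser is an illustration, not an HMC-provided tool.

```shell
# Sample 'lslparmigr -r msp' output (taken from Example 5-2)
out='source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/'

# Extract one attribute from the comma-separated attr=value list
get_attr() {
    echo "$1" | tr ',' '\n' | awk -F= -v k="$2" '$1 == k { print $2 }'
}

echo "source MSP: $(get_attr "$out" source_msp_name)"
echo "destination MSP: $(get_attr "$out" dest_msp_names)"
```

A script can use values extracted this way to choose mover service partitions for a subsequent migrlpar invocation.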

Example 5-3 The migrlpar command with remote options


hscroot@hmc1:~> migrlpar -o v --ip 9.3.5.180 -u hscroot -m
9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 -p mobile

Warnings:
HSCLA295 As part of the migration process, the HMC will create a new
migration profile containing the partition's current state. The
default is to use the current profile, which will replace the existing
definition of this profile. While this works for most scenarios, other
options are possible. You may specify a different existing profile,
which would be replaced with the current partition definition, or you
may specify a new profile to save the current partition state.



5.5 Multiple shared processor pools
IBM Power Systems servers with AIX support multiple shared processor pools.
A shared processor pool allows a specified number of processors on a given
system to be reserved for a specific group of logical partitions.
Shared processor pools need not be configured any differently from the way
described in PowerVM Virtualization on IBM System p: Managing and
Monitoring, SG24-7590. However, additional steps are included in the HMC GUI
and CLI procedures.

5.5.1 Shared processor pools in migration and validation GUI


The migration wizard presents you with a list of all defined shared processor
pools on the destination that have sufficient capacity to receive the migrating
partition. You are asked to identify the target pool using the Migrate task as
shown in Figure 5-21. The name and identifier of the shared processor pool on
the destination do not have to be the same as those on the source.

Figure 5-21 Shared processor pool selection in migration wizard

If you use the CLI, the migration operation fails if the arrival of the migrating
partition would exceed the maximum processor limit of the chosen shared pool
on the destination.



The ability to select a specific shared processor pool is also presented during the
Validate task after an error-free validation has occurred as shown in Figure 5-22.

Figure 5-22 Shared processor pool selection in Validate task

If the migration is initiated after a change on the destination system that leaves
the selected processor pool unable to accommodate the client partition, the
migration fails.

5.5.2 Processor pools on command line


The CLI changes that accommodate processor pools for partition mobility affect
both the lslparmigr and migrlpar commands. Examples of the changes are
described in 5.7, “The command-line interface” on page 162.



5.6 Migrating a partition with physical resources
This section explains how to migrate a partition that is currently using physical
resources.

5.6.1 Overview
Three types of adapters cannot be present in a partition that is participating in
an active migration: physical adapters, Integrated Virtual Ethernet adapters,
and non-default virtual serial adapters. A non-default virtual serial adapter is any
virtual serial adapter other than the two automatically created virtual serial
adapters in slots 0 and 1. If a partition has non-default virtual serial adapters, you
must deconfigure them; you might also have to switch from physical to virtual
resources. For this scenario, we assume you are beginning with a mobile
partition that uses a single physical Ethernet adapter and a single physical SCSI
adapter. See Figure 5-23.

Figure 5-23 The mobile partition is using physical resources



If the mobile partition has any adapters that cannot be migrated, then they must
be removed from the mobile partition before it can participate in an active
migration. If these adapters are marked as desired in the active profile, remove
them using dynamic logical partitioning. If these adapters are marked as required
in the active profile, activate the partition with a profile that does not have them
marked as required.

The process described in this section covers both the case where the mobile
partition does not have such required adapters, and the case where it does.

Before proceeding, verify that the requirements for Live Partition Mobility are met,
as outlined in Chapter 3, “Requirements and preparation” on page 45. However,
ignore that chapter’s check for adapters that cannot be migrated, because this
exception is the subject of 5.6, “Migrating a partition with physical resources” on
page 149.

5.6.2 Configure a Virtual I/O Server on the source system


Create and install a Virtual I/O Server partition on the source system. When
creating and configuring the partition, see the following procedure. For detailed
instructions, see the basic configuration information in 4.3, “Preparing for an
active partition migration” on page 94.

To configure a Virtual I/O Server on the source system:


1. Configure a virtual SCSI server adapter.

Important: Mark the virtual SCSI server adapter as desired (not required)
in your Virtual I/O Server partition profile. This setting is necessary to allow
the migration process to dynamically remove this adapter during a
migration.

When creating the virtual SCSI server adapter, use the “Only selected
client partition can connect” option. For the Client partition field, specify the
mobile partition. For the Client adapter field, specify an unused virtual slot
on the mobile partition. Do not set the server adapter to accept
connections from any partition. This method allows the migration process
to identify which server adapter is paired with which client partition.

2. Attach and configure the remote storage using a storage area network:
– Create one LUN on your storage subsystem for each disk in use on your
mobile partition. Ensure that these LUNs are at least as large as the disks
on your mobile partition. Make these LUNs available as hdisks on the
source Virtual I/O Server.



– On the source Virtual I/O Server, set the reserve_policy on the disks to
no_reserve by using the chdev command:
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve
– Assign the hdisks as targets of the virtual SCSI server adapter that you
created, using the mkvdev command. Do not create volume groups and
logical volumes on the hdisks within the Virtual I/O Server.
3. Configure shared Ethernet adapters for each physical network interface that
is configured on the mobile partition.
4. Ensure the Mover service partition box is checked in the Virtual I/O Server
partition properties.
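The disk preparation in step 2 above can be scripted as a loop over all of the LUN-backed hdisks. The following is a hedged sketch: hdisk5, hdisk6, and vhost0 follow the example in this section, and the stubs only allow a dry-run off the Virtual I/O Server (remove them to run the real chdev and mkvdev commands there).

```shell
# Stubs for dry-run off the Virtual I/O Server; remove on a real VIOS.
chdev()  { echo "dry-run: chdev $*"; }
mkvdev() { echo "dry-run: mkvdev $*"; }

# Disable SCSI reservation and map each disk to the virtual SCSI server adapter
for d in hdisk5 hdisk6; do
    chdev -dev "$d" -attr reserve_policy=no_reserve
    mkvdev -vdev "$d" -vadapter vhost0
done
```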

Figure 5-24 shows the created and configured source Virtual I/O Server.

Figure 5-24 The source Virtual I/O Server is created and configured



5.6.3 Configure a Virtual I/O Server on the destination system
Create and install a Virtual I/O Server partition on the destination system. When
creating and configuring the partition, see the following procedure. For detailed
instructions, see the basic configuration information in 4.3, “Preparing for an
active partition migration” on page 94.

Important: Do not create any virtual SCSI server adapters for your mobile
partition on the destination Virtual I/O Server. Do not map any shared hdisks
on the destination Virtual I/O Server. All of this is done automatically for you
during the migration.

To configure a Virtual I/O Server on the destination system:


1. Use standard SAN configuration techniques to attach the same remote
storage that you attached to the source Virtual I/O Server.
The same remote LUNs must be available as hdisks on both the source and
destination Virtual I/O Servers.
2. Configure shared Ethernet adapters for each physical network that is
configured on the mobile partition, just as you did on the source Virtual I/O
Server.
3. Ensure the Mover service partition box is checked in the Virtual I/O Server
partition properties.



Figure 5-25 shows the created and configured destination Virtual I/O Server. In
the figure, the hdisk numbers on the destination Virtual I/O Server differ from
those on the source Virtual I/O Server. The hdisk numbers may be different, but
they are the same LUNs on the storage subsystem.

Figure 5-25 The destination Virtual I/O Server is created and configured

5.6.4 Configure storage on the mobile partition


To switch over to using virtual storage devices on your mobile partition:
1. Add a virtual SCSI client adapter to the profile of the mobile partition. Ensure
the virtual SCSI client adapter refers to the server adapter you created on the
source Virtual I/O Server.
2. Use dynamic logical partitioning to add a virtual SCSI client adapter with the
same properties from the previous step to the running mobile partition.



3. Configure the virtual SCSI devices on the mobile partition, as follows:
a. Run the cfgmgr command on the mobile partition.
b. Verify that the virtual SCSI adapters are in the Available state by using the
lsdev command:
# lsdev -t IBM,v-scsi
c. Verify that the virtual SCSI disks are in the Available state by using the
lsdev command:
# lsdev -t vdisk

Figure 5-26 shows the configured storage devices on the mobile partition.

Figure 5-26 The storage devices are configured on the mobile partition



4. On the mobile partition, move rootvg from physical disks to virtual disks. For
example, assume hdisk0 and hdisk1 are the physical disks in rootvg, and that
hdisk7 and hdisk8 are the virtual disks you created whose sizes are at least
as large as hdisk0 and hdisk1. Move rootvg as follows:
a. Extend rootvg on to virtual disks using the extendvg command.
# extendvg rootvg hdisk7 hdisk8
If the extendvg command fails, depending on the size of the disks, you
might have to change the maximum number of physical partitions per
physical volume (the factor) of the volume group by using the chvg
command before extending to the new disks. Do not use the chvg
command unless the extendvg command fails:
# chvg -t 10 rootvg
Figure 5-27 shows rootvg extended on to the virtual disks.

Figure 5-27 The root volume group extends on to virtual disks



b. Migrate physical partitions off the physical disks in rootvg on to the virtual
disks in rootvg using the migratepv command:
# migratepv hdisk0 hdisk7
# migratepv hdisk1 hdisk8
c. Set the bootlist to a virtual disk in rootvg using the bootlist command:
# bootlist -m normal hdisk7 hdisk8
d. Run the bosboot command on a virtual disk in rootvg:
# bosboot -ad /dev/hdisk7
e. Remove physical disks from rootvg using the reducevg command:
# reducevg rootvg hdisk0 hdisk1
5. Repeat the previous step (excluding the bootlist command and the bosboot
command) for all other volume groups on the mobile partition.
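The whole rootvg move in step 4 can be collected into one script. This is a sketch using the disk names from the example; the AIX LVM commands are stubbed so the sequence can be dry-run, and the stub loop must be removed to run it for real on the mobile partition.

```shell
# Stub the AIX LVM commands so the sequence can be dry-run;
# remove this loop to run for real on the mobile partition.
for cmd in extendvg migratepv bootlist bosboot reducevg; do
    eval "$cmd() { echo \"dry-run: $cmd \$*\"; }"
done

extendvg rootvg hdisk7 hdisk8          # add the virtual disks to rootvg
migratepv hdisk0 hdisk7                # move physical partitions off hdisk0
migratepv hdisk1 hdisk8                # ... and off hdisk1
bootlist -m normal hdisk7 hdisk8       # boot from the virtual disks
bosboot -ad /dev/hdisk7                # rebuild the boot image
reducevg rootvg hdisk0 hdisk1          # drop the physical disks from rootvg
```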

Figure 5-28 shows rootvg on the mobile partition now wholly on the virtual disks.

Figure 5-28 The root volume group of the mobile partition is on virtual disks only



5.6.5 Configure network on the mobile partition
To configure the virtual network devices on the mobile partition:
1. Add virtual Ethernet adapters to the profile of your mobile partition:
– For each physical network on the mobile partition, add one virtual adapter.
– Ensure the virtual Ethernet adapters use the shared Ethernet adapters
you created on the source Virtual I/O Server.
2. Use dynamic logical partitioning to add virtual Ethernet adapters with the
same properties from the previous step to the running mobile partition.
3. Configure the virtual Ethernet devices on the mobile partition:
a. Run the cfgmgr command on the mobile partition to make the devices
available.
b. Verify that the virtual Ethernet adapters are in the Available state using the
lsdev command:
# lsdev -t IBM,l-lan



Figure 5-29 shows the mobile partition with a virtual network device created.

Figure 5-29 The mobile partition has a virtual network device created

Now that the virtual network adapters are configured, stop using the physical
network adapters and begin using the virtual network adapters. You can move to
the virtual networks on the mobile partition by using either new or existing IP
addresses. The two procedures, discussed in this section, affect network
connectivity differently. Understand how all running applications use the
networks, and take appropriate actions before proceeding.

Use new IP addresses


To use new IP addresses, obtain a new IP address for each physical network that
the mobile partition is using, and then:
1. Configure the virtual network interfaces on the mobile partition, using the new
IP addresses that you obtained.
2. Verify network connectivity for each of the virtual network interfaces.
3. Unconfigure the physical network interfaces on the mobile partition.



Use existing IP addresses
To use existing IP addresses, record the network information for the physical
network interfaces on the mobile partition, and then:
1. Unconfigure the physical network interfaces on the mobile partition.
2. Configure the virtual network interfaces on the mobile partition, using the IP
addresses previously used by the physical interfaces.
3. Verify network connectivity for each of the virtual interfaces.
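The existing-address procedure can be sketched with the AIX chdev command. The interface names, address, and netmask below are illustrative only, and the stubs allow a dry-run; on the mobile partition you would remove them and use your recorded values.

```shell
# Stubs for dry-run; remove on the mobile partition.
chdev()   { echo "dry-run: chdev $*"; }
netstat() { echo "dry-run: netstat $*"; }

netstat -rn                                  # 0. record routes and addresses
chdev -l en0 -a state=detach                 # 1. unconfigure the physical interface
chdev -l en1 -a netaddr=9.3.5.115 \
      -a netmask=255.255.254.0 -a state=up   # 2. configure the virtual interface
```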

Figure 5-30 shows the mobile partition using a virtual network, with its physical
network interface unconfigured.

Figure 5-30 The mobile partition has unconfigured its physical network interface



5.6.6 Remove adapters from the mobile partition
Two procedures are available to remove adapters that cannot be migrated from
the mobile partition:
- If the mobile partition has any adapters that are marked as required, follow
“Process 1: Required adapters” on page 160.
- If no adapters are marked as required, follow “Process 2: No required
adapters” on page 160.

Process 1: Required adapters


To remove the adapters from the mobile partition:
1. Remove all physical adapters (including Integrated Virtual Ethernet) from the
profile of the mobile partition.
2. Remove all virtual serial adapters in slots 2 and above from the profile of the
mobile partition.
3. Shut down the mobile partition.
4. Activate the mobile partition with the modified profile.

Note: A reboot is not sufficient. The mobile partition must be shut down
and activated with the modified profile.

Process 2: No required adapters


To remove the adapters from the mobile partition:
1. At this point, no physical devices are in use on the mobile partition. Remove
all physical devices, along with their children, by using the rmdev command.
For example, if the only physical devices in use are in slots pci0 and pci1, run
the following commands to remove them:
# rmdev -R -dl pci0
# rmdev -R -dl pci1
2. Remove all physical adapters from the mobile partition using dynamic logical
partitioning.
3. Remove all virtual serial adapters from slots 2 and above from the mobile
partition using dynamic logical partitioning.



Figure 5-31 shows the mobile partition with only virtual adapters.

Figure 5-31 The mobile partition with only virtual adapters

5.6.7 Ready to migrate


The mobile partition is now ready to be migrated. Close any virtual terminals on
the mobile partition, because they will lose connection when the partition
migrates to the destination system. Virtual terminals can be reopened when the
partition is on the destination system.

After the migration is complete, consider adding physical resources back to the
mobile partition, if they are available on the destination system.

Note: The active mobile partition profile is created on the destination system
without any references to any physical I/O slots that were present in your
profile on the source system. Any other mobile partition profiles are copied
unchanged.



Figure 5-32 shows the mobile partition migrated to the destination system.

Figure 5-32 The mobile partition on the destination system

5.7 The command-line interface


The HMC provides a command-line interface (CLI) and an easy-to-use GUI for
Live Partition Mobility. This CLI allows you to script frequently performed
operations. This automation saves time and reduces the chance of errors.

Several existing HMC commands for Live Partition Mobility have been updated to
support the latest mobility features. The commands are migrlpar, lslparmigr,
and lssyscfg.

Note: An existing command, migrcfg, is used to push partition configuration


data that is held on the HMC to a managed system. Despite its migr prefix, this
operation is distinct from the Live Partition Mobility described in this book.



The HMC commands can be launched either locally on the HMC or remotely
over SSH, for example, ssh -l <user> <hmc> <hmc_command>.

Tip: Use the ssh-keygen command to create the public and private key-pair on
your client. Then add these keys to the HMC user’s key-chain by using the
mkauthkeys --add command on the HMC.
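The key setup described in the tip might look like the following. This sketch only echoes the commands it would run (via the run helper) so it can be inspected safely; hscroot@hmc1 and the key path are examples, not values from your environment.

```shell
# Echo instead of executing; replace 'run' with direct execution when ready.
run() { echo "would run: $*"; }

run ssh-keygen -t rsa -f "$HOME/.ssh/id_hmc" -N ""
run ssh hscroot@hmc1 "mkauthkeys --add \"\$(cat ~/.ssh/id_hmc.pub)\""
```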

Command conventions
The commands follow the HMC command conventions, which are:
- Single-character parameters are preceded by a single dash (-).
- Multiple-character parameters are preceded by a double dash (--).
- All filter and attribute names are lowercase, with underscores joining words,
for example, vios_lpar_id.

5.7.1 The migrlpar command


This command is used to validate, initiate, stop, and recover a partition migration.
The same command, syntax, and options are used for both active and inactive
migrations. The HMC determines which type of migration to perform based on
the state of the partition referenced in the command. See the “Command
conventions” on page 163. The command syntax is:
migrlpar -o m | r | s | v
-m <managed system>
[-t <managed system>]
[--ip <IP address>]
[-u <user ID>]
-p <partition name> | --id <partitionID>
[-n <profile name>]
[-f <input data file> | -i <input data>]
[-w <wait time>]
[-d <detail level>]
[-v]
[--force]
[--help]

The flags used in this command are:


-o The operation to perform, which can be:
m - validate and migrate
r - recover
s - stop
v - validate



-m <managed system> The source managed system’s name
-t <managed system> The destination managed system’s name
-p <partition name> The partition on which to perform the operation
--ip <IP address> The IP address or host name of the target managed
system's HMC
-u <user ID> The user ID to use on the target managed system's
HMC
--id <partitionID> The ID of the partition on which to perform the
operation
-n <profile name> The name of the partition profile to be created on the
destination.
-f <input data file> The name of the file containing input data for this
command. Use either of the following formats:
attr_name1=value,attr_name2=value,...
attr_name1=value1,value2,...
-i <input data> The input data for this command, typically the virtual
adapter mapping from source to destination or the
destination shared-processor pool. The format is the
same as that of the input data file used with the -f option.
-w <wait time> The time, in minutes, to wait for any operating system
command to complete
-d <detail level> The level of detail requested from operating system
commands; values range from 0 (none) to 5 (highest)
-v Verbose mode
--force Force the recovery. This option should be used with
caution.
--help Prints a help message

Input data format


The data given in the file specified with the -f flag, or the data specified with -i,
must be in comma-separated value (CSV) format. These flags can be used with
the migrate (-o m) and validate (-o v) operations. The supported attributes
are virtual_scsi_mappings, virtual_fc_mappings, source_msp_name,
source_msp_id, dest_msp_name, dest_msp_id, shared_proc_pool_id, and
shared_proc_pool_name.



The data specified with the virtual_scsi_mappings attribute consists of one or
more source virtual SCSI adapter to destination virtual SCSI adapter mappings
in the format:
client_virtual_slot_num/dest_vios_lpar_name/dest_vios_lpar_id

The data format specified with the virtual_fc_mappings attribute mirrors the
format of the virtual_scsi_mappings attribute as it relates to virtual Fibre Channel
adapter mappings for N_Port ID Virtualization (NPIV).
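Mapping strings in this format can be assembled programmatically before being passed to -i. The following is a minimal sketch; the slot numbers and Virtual I/O Server names are illustrative, not taken from a real configuration.

```shell
# Join one or more client_slot/dest_vios_name/dest_vios_id triples into
# the comma-separated value expected by the virtual_scsi_mappings attribute
build_vscsi_mappings() {
    printf 'virtual_scsi_mappings=%s' "$(IFS=,; echo "$*")"
}

build_vscsi_mappings 2/VIOS1_L10/1 3/VIOS2_L10/2
```

The result could then be passed on a migrlpar command line, for example as -i "$(build_vscsi_mappings 2/VIOS1_L10/1)".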

Validate operation (-o v)


The migrlpar -o v command validates the proposed migration, returning a
non-zero return code if the validate operation finds any configuration errors that
will cause the migration to fail. Warnings not accompanied by an error do not
cause the validate operation to fail. The command output is a list of errors and
warnings of every potential or real problem that the HMC finds. The HMC does
not stop the validation process at the first error; it continues processing as far as
possible in an attempt to identify all problems that might invalidate the migration.
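Because the validate operation returns a non-zero exit code on error, a wrapper script can gate the migrate operation on a clean validation. This is a sketch: the migrlpar stub below only lets it be dry-run off the HMC and must be removed on a real HMC, where the system and partition names would be your own.

```shell
# Stub so the sketch can be dry-run off the HMC; remove on a real HMC.
migrlpar() { echo "dry-run: migrlpar $*"; }

# Migrate only if the validate operation (-o v) exits with 0
migrate_if_valid() {
    src=$1; dest=$2; lpar=$3
    if migrlpar -o v -m "$src" -t "$dest" -p "$lpar"; then
        migrlpar -o m -m "$src" -t "$dest" -p "$lpar"
    else
        echo "validation failed for $lpar; migration not attempted" >&2
        return 1
    fi
}

migrate_if_valid srcSystem destSystem myLPAR
```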

Migrate operation (-o m)


The migrlpar -o m command initiates a migration. The same command is used
for inactive or active migrations; the HMC chooses the appropriate migration type
based on the state of the given partition.

Stop operation (-o s)


If the migration is in a stoppable state, the migrlpar -o s command halts the
specified migration and rolls back any changes. This command can be executed
only on the HMC from which the migration was started.

Recovery operation (-o r)


In the event of a lost connection or a migration failure, you can use the recovery
operation, migrlpar -o r, to resolve a partially migrated state. Depending on
the point in the migration at which the connection was lost, the recovery
command either rolls back the operation (undoing the changes on the
destination system) or completes the migration.

Examples
To migrate the partition myLPAR from the system srcSystem to the destSystem
using the default MSPs and adapter maps, use the following command:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR

In an environment with multiple mover service partitions on the source and


destination, you can specify which mover service partitions to use in a validation



or migration operation. The following command validates the migration in the
previous example with specific mover service partitions. Note that you can mix
partition names and partition IDs in the same command:
$ migrlpar -o v -m srcSystem -t destSystem -p myLPAR \
-i source_msp_id=2,dest_msp_name=VIOS2_L10

When the destination system has multiple shared-processor pools, you can
stipulate the shared-processor pool to which the moving partition is assigned
on the destination with either of the following commands:
- $ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i
"shared_proc_pool_id=1"
- $ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i
"shared_proc_pool_name=DefaultPool"

The capacity of the chosen shared-processor pool must be sufficient to
accommodate the migrating partition; otherwise, the migration operation fails.

The syntax to stop a partition migration is:


$ migrlpar -o s -m srcSystem -p MyLPAR

The syntax to recover a failed migration is:


$ migrlpar -o r -m srcSystem -p MyLPAR

You can use the --force flag on the recover command, but do so only when the
partition migration has failed and left the partition definition on both the source
and destination systems.

5.7.2 The lslparmigr command


Use the lslparmigr command to show the state of running migrations or to show
managed mover service partitions, Virtual I/O Servers, and adapter mappings
that might be used for a partition migration. The command syntax is:
lslparmigr -r lpar | manager | msp | procpool | sys | virtualio
[-m <managed system>]
[-t <managed system>]
[--ip <IP address>]
[-u <user ID>]
[--filter <filter data>]
[-F [<attribute names>]]
[--header]
[--help]

The flags used in this command are:
-r The type of resources for which to list information:
lpar - partition
manager - Hardware Management Console (HMC)
msp - mover service partitions
procpool - shared processor pool
sys - managed system (CEC)
virtualio - virtual I/O
-m <managed system> The source managed system's name.
-t <managed system> The destination managed system's name.
--ip <IP address> The IP address or host name of the target managed
system's HMC.
-u <user ID> The user ID to use on the target managed system's
HMC.
--filter <filter data>
Filters the data to be listed in CSV format.
Use either of the following formats:
filter_name1=value,filter_name2=value,...
filter_name1=value1,value2,...
Valid filter names are: lpar_ids and lpar_names
The filters are mutually exclusive. This parameter is
not valid with -r sys, is optional with -r lpar, and is
required with -r msp and -r virtualio. With -r msp and
-r virtualio, exactly one partition name or ID must be
specified and the partition must be an AIX or Linux
partition.
-F [<attribute names>] Comma-separated list of the names of the attributes
to be listed. If no attribute names are specified, then
all attributes will be listed.
--header Prints a header of attribute names when -F is also
specified
--help Prints a help message

Partition information (-r lpar)


The lslparmigr -r lpar command displays partition migration information.
Without the -F flag, the attributes that the command lists are lpar_name, lpar_id,
migration_state, migration_type, source_sys_name, dest_sys_name,
source_lpar_id, dest_lpar_id, source_msp_name, source_msp_id,
dest_msp_name, and dest_msp_id.

Remote migration information (-r manager)
The lslparmigr -r manager command displays information regarding the
current HMC’s ability to migrate a mobile partition to a system managed by a
remote HMC.

Mover service partition information (-r msp)


The lslparmigr -r msp command displays the mover service partition-to-mover
service partition relationships between the source and destination systems. For
each mover service partition on the source system, the command displays the
mover service partitions on the destination system that it can communicate with.
The attributes listed are source_msp_name, source_msp_id, dest_msp_names,
and dest_msp_ids. If there are no MSPs on the source system, no data is listed.

Shared processor pool information (-r procpool)


The lslparmigr -r procpool command displays all available shared processor
pools on the destination system that can be used by the mobility client. The
command shows the possible processor pool names and IDs. Attributes listed
are shared_proc_pool_ids and shared_proc_pool_names.

System information (-r sys)


The lslparmigr -r sys command displays all partition migration information for
a managed system. Attributes listed are inactive_lpar_migration_capable,
active_lpar_migration_capable, num_inactive_migrations_supported,
num_active_migrations_supported, num_inactive_migrations_in_progress, and
num_active_migrations_in_progress.

Virtual I/O Server information (-r virtualio)


The lslparmigr -r virtualio command displays information pertaining to the
candidate destination Virtual I/O Servers. The command shows the possible and
suggested mappings between the source virtual client adapters and the
destination virtual server adapters for a given migration for both virtual SCSI and
virtual Fibre Channel adapters.

Examples
The following examples illustrate how this command is used.

System migration information


To display the migration capabilities of a system, use the following syntax:
$ lslparmigr -r sys -m mySystem

This command produces:
inactive_lpar_mobility_capable=1,num_inactive_migrations_supported=40,n
um_inactive_migrations_in_progress=1,active_lpar_mobility_capable=1,num
_active_migrations_supported=40,num_active_migrations_in_progress=0

In this example, we can see that the system is capable of both active and inactive
migration and that there is one inactive partition migration in progress. By using
the -F flag, the same information is produced in a CSV format:
$ lslparmigr -r sys -m mySystem -F

This command produces:
1,40,1,1,40,0

These attribute values are the same as in the preceding example, without the
attribute identifiers. This format is appropriate for parsing or for importing into a
spreadsheet. Adding the --header flag prints column headers on the first line:
$ lslparmigr -r sys -m mySystem -F --header

This command produces:
inactive_lpar_mobility_capable,num_inactive_migrations_supported,
num_inactive_migrations_in_progress,active_lpar_mobility_capable,
num_active_migrations_supported,num_active_migrations_in_progress
1,40,1,1,40,0

On a terminal, the header is printed on a single line.

If you are only interested in specific attributes, then you can specify these as
options to the -F flag. For example, if you want to know just the number of active
and inactive migrations in progress, use the following command:
$ lslparmigr -r sys -m mySystem -F \
num_active_migrations_in_progress,num_inactive_migrations_in_progress

This command produces the following result, which indicates that there are no
active migrations and one inactive migration running:
0,1

If you want a space instead of a comma to separate values, surround the
attributes with double quotes.
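The CSV form is convenient for scripting. The following sketch shows one way to split such output into shell variables; the output string is hard-coded here as a sample, because lslparmigr is available only on an HMC:

```shell
# Sample of the -F output shown above, hard-coded because lslparmigr
# runs only on an HMC
out="1,40,1,1,40,0"

# Split the six comma-separated fields into named variables
IFS=, read inact_capable inact_supported inact_in_progress \
           act_capable act_supported act_in_progress <<EOF
$out
EOF

echo "active migrations in progress: $act_in_progress"
echo "inactive migrations in progress: $inact_in_progress"
```

The same pattern applies to any of the -F outputs in this section; only the variable names, which are illustrative here, change.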

Remote migration information


To show the remote migration information of the HMC, use the -r manager option:
$ lslparmigr -r manager

This option produces:
remote_lpar_mobility_capable=1

Here, we see that the command supplies only one attribute for the user on the
HMC from which it is executed. The attribute remote_lpar_mobility_capable
displays a value of 1 if the HMC has the ability to perform migrations to a remote
HMC. Conversely, a value of 0 indicates that the HMC is incapable of remote
migrations.

You may also use the -F flag followed by the attribute to limit the output of the
command to the value. For example, the command:
$ lslparmigr -r manager -F remote_lpar_mobility_capable

This command produces:
1

Partition migration information


To show the migration information of the logical partitions of a managed system,
use the -r lpar option:
$ lslparmigr -r lpar -m mySystem

This option produces:
name=QA,lpar_id=4,migration_state=Not Migrating
name=VIOS1_L10,lpar_id=1,migration_state=Not Migrating
name=PROD,lpar_id=3,migration_state=Migration Starting,
migration_type=inactive,dest_sys_name=9117-MMA-SN100F6A0-L9,
dest_lpar_id=65535

Here, we see that the system mySystem is hosting three partitions, QA,
VIOS1_L10, and PROD. Of these, the PROD partition is in the Starting state of
an inactive migration, as indicated by the migration_state and migration_type
attributes. When the command was run, the ID of the destination partition had
not yet been chosen, as shown by the 65535 value of the dest_lpar_id
attribute.

Use the --filter flag to limit the output to a given set of partitions with either the
lpar_names or the lpar_ids attribute:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3

This flag produces:
name=PROD,lpar_id=3,migration_state=Migration Starting,
migration_type=inactive,dest_sys_name=9117-MMA-SN100F6A0-L9,
dest_lpar_id=7

Here, the output information is limited to the partition with ID=3, which is the one
performing the inactive migration. Here, we see that the dest_lpar_id has now
been chosen.

You can use the -F flag to generate the same information in CSV format or to limit
the output:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3 -F

This flag produces:
PROD,3,Migration Starting,inactive,9117-MMA-SN101F170-L10,
9117-MMA-SN100F6A0-L9,3,7,,unavailable,,unavailable

Here the -F flag, without additional parameters, has printed all the attributes. In
the example, the last four fields of output pertain to the MSPs; because the
partition in question is undergoing an inactive migration, no MSPs are involved
and these fields are empty. You can use the --header flag with the -F flag to print
a line of column headers at the start of the output.

Mover service partition information


The -r msp option shows the possible mover service partitions for a migration:
$ lslparmigr -r msp -m srcSystem -t destSystem --filter lpar_names=TEST

This option produces:
source_msp_name=VIOS1_L9,source_msp_id=1,
"dest_msp_names=VIOS1_L10","dest_msp_ids=1"

Here, we see that if we move the partition TEST from srcSystem to destSystem,
then:
򐂰 There is a mover service partition on the source (VIOS1_L9).
򐂰 There is a mover service partition on the destination (VIOS1_L10).
򐂰 If the migration uses VIOS1_L9 on the source, VIOS1_L10 can be used on
the destination.

This approach gives one possible mover service partition combination for the
migration.

Shared Processor Pool


Use the -r procpool option to display the shared processor pools capable of
hosting the client partition on the destination server. The -F flag (optional) can be
used to format or limit the output, as follows:
$ lslparmigr -r procpool -m srcSystem -t destSystem \
--filter lpar_names=TEST -F shared_proc_pool_ids

The flag produces:
1,0

This output indicates that processor pool IDs 1 and 0 are capable of hosting the
client partition called TEST. The command requires the -m, -t, and --filter flags.
The --filter flag requires either the lpar_ids or the lpar_names attribute to
identify the client partition. You may only specify one client partition at a time.

The command can also be used to identify the shared-processor pools available
in remote HMC migrations, with the --ip and -u flags specifying the remote HMC
and remote user ID, respectively. Also, without the -F flag you are given detailed
output of the attributes and values, as shown in the following example, which
adds the remote HMC specification and uses lpar_ids to specify the client partition:
$ lslparmigr -r procpool -m srcSystem --ip 9.3.5.180 \
-u hscroot -t destSystem --filter lpar_ids=2

This command communicates with an HMC at IP address 9.3.5.180 using the
HMC user ID hscroot. The command then checks the remote HMC for possible
processor pools on the managed system destSystem.
It produces the output:
"shared_proc_pool_ids=1,0","shared_proc_pool_names=SharedPool01,Default
Pool"

Here, the system is showing that two shared-processor pools are possible
destinations for the client partition.

Virtual I/O Server


Use the -r virtualio option to display the possible virtual adapter mappings for a
given migration. The -F flag can be used to format or limit the output, as follows:
$ lslparmigr -r virtualio -m srcSystem -t destSystem \
--filter lpar_names=TEST -F suggested_virtual_scsi_mappings

The flag produces:

40/VIOS1_L10/1

This output indicates that if you migrate the client partition called TEST from the
srcSystem to destSystem, then the suggested virtual SCSI adapter mapping
would be to map the client virtual adapter in slot 40 to the Virtual I/O Server
called VIOS1_L10, which has a partition ID of 1, on the destination system.
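A script can pick such a mapping apart with a field split on the slash separators. The sketch below hard-codes the sample mapping above, because the real value would come from an lslparmigr call on the HMC:

```shell
# Sample suggested_virtual_scsi_mappings value, in the form
# client-slot/VIOS-name/VIOS-partition-ID
mapping="40/VIOS1_L10/1"

# Split the triple on the slash separators
IFS=/ read client_slot dest_vios dest_vios_id <<EOF
$mapping
EOF

echo "client slot $client_slot -> $dest_vios (partition ID $dest_vios_id)"
```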

5.7.3 The lssyscfg command
The lssyscfg command is a native HMC command that supports Live Partition
Mobility.

The lssyscfg -r sys command displays two attributes,
active_lpar_mobility_capable and inactive_lpar_mobility_capable. These
attributes can have a value of either 0 (incapable) or 1 (capable).

The lssyscfg -r lpar command displays the msp and time_ref partition
attributes on Virtual I/O Server partitions that are capable of participating in
active partition migrations. The msp attribute has a value of 1 when the partition
is enabled as a mover service partition and 0 when it is not.

5.7.4 The mkauthkeys command


The mkauthkeys command is used for retrieval, removal, and validation of SSH
authentication to a remote system. The command syntax is:
mkauthkeys -a | --add | -g | -r | --remove | --test
[--ip <IP address>]
[-u <user ID>]
[--passwd <password>]
[-t <key type>]
[<key string>]
[--help]

The flags used in this command are:
-a --add Adds SSH key as an authorized key
-g Gets the user's SSH public key
-r --remove Removes SSH key from user's authorized key list
--test Verifies authentication to a remote HMC.
--ip <IP address> The IP address or host name of a remote HMC with
which to exchange authentication keys.
-u <user ID> The ID of a user whose authentication keys are to be
managed.
--passwd <password> The password to use to log on to the remote HMC. If this
parameter is omitted, you will be prompted for the
password.
-t <key type> The type of SSH authentication keys:
rsa - RSA authentication
dsa - DSA authentication

<key string> The SSH key string to add or remove.
--help Prints this help.

Examples
To get the remote HMC user's SSH public key, you may simply use the -g flag:
$ mkauthkeys --ip rmtHostName -u hscroot -g

You may also specify a preferred authentication method. To choose between
DSA authentication or RSA authentication key usage, you may use the -t flag in
either case:
򐂰 $ mkauthkeys --ip rmtHostName -u hscroot -t rsa
򐂰 $ mkauthkeys --ip rmtHostName -u hscroot -t dsa

In some cases, you may choose to remove the authentication keys, which you
can do by using the mkauthkeys command with the -r flag:
$ mkauthkeys -r ccfw@rmtHostName

The HMC stores the key under a user called ccfw, not under the user ID you
specified in the steps to retrieve the authentication keys. Also note that the
remote HMC's host name must be specified in this command. Use the actual IP
address in place of rmtHostName only if DNS cannot resolve the host name and
you used the IP address to configure the authentication.

The --test flag allows you to check whether authentication is properly configured
to the remote HMC:
$ mkauthkeys --ip rmtHostName -u hscroot --test

The command returns the following error if keys were not configured properly:
HSCL3653 The Secure Shell (SSH) communication configuration between the
source and target Hardware Management Consoles has not been set up
properly for user hscroot. Please run the mkauthkeys command to set up
the SSH communication authentication keys.

5.7.5 A more complex example


Example 5-4 on page 176 provides a more complete example of how the Live
Partition Mobility commands can be used. This script fragment moves all the
migratable partitions from one system to another. The example assumes that two
environment variables, SRC_SERVER and DEST_SERVER, point to the system
to empty and the system to load, respectively.

The algorithm starts by listing all the partitions on SRC_SERVER. It then filters
out any Virtual I/O Server partitions and partitions that are already migrating. For
each remaining partition, it invokes a migration operation to DEST_SERVER. In
this example, the migrations take place sequentially. Running them in parallel is
acceptable if there are no more than four concurrent active migrations per mover
service partition. This is an exercise left to the reader.
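One possible shape for that exercise is sketched below, using xargs to cap concurrency at four migrations. The partition names are hard-coded and migrlpar is prefixed with echo so that the fragment runs outside an HMC; on a real system you would generate the list with lslparmigr and drop the echo:

```shell
# Hypothetical list of partitions to move; in practice this would come
# from: lslparmigr -r lpar -m $SRC_SERVER -F name
LPARS="lpar1 lpar2 lpar3 lpar4 lpar5 lpar6"

# xargs -P 4 runs at most four commands at a time, matching the
# per-MSP limit on concurrent active migrations. The echo prefix makes
# this a dry run; remove it on a real HMC.
printf '%s\n' $LPARS | xargs -P 4 -I PART \
    echo migrlpar -o m -m srcSystem -t destSystem -p PART
```

Note that this sketch omits the per-partition validation and recovery logic shown in the sequential script; a production version would keep those checks.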

How it works
The script starts by checking that both the source and destination systems are
mobility capable. For this, it uses the new attributes given in the lssyscfg
command. It then uses the lslparmigr command to list all the partitions on the
system. It uses this list as an outer loop for the rest of the script. The program
then performs a number of elementary checks:
򐂰 The source and destination must be capable of mobility.
The lssyscfg command shows the mobility capability attributes.
򐂰 Only partitions of type aixlinux can be migrated.
The script uses the lssyscfg command to ascertain the partition type.
򐂰 A partition that is already migrating must not be migrated again.
The script reuses the lslparmigr command to check this.
򐂰 The partition migration must validate successfully.
The script runs migrlpar -o v and checks the return code.

If all the checks pass, the migration is launched with the migrlpar command. The
code snippet does some elementary error checking. If migrlpar returns a
non-zero value, a recovery is attempted using the migrlpar -o r command.

See Example 5-4 on page 176.

Example 5-4 Script fragment to migrate all partitions on a system
#
# Get the mobility capabilities of the source and destination systems
#
SRC_CAP=$(lssyscfg -r sys -m $SRC_SERVER \
    -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)
DEST_CAP=$(lssyscfg -r sys -m $DEST_SERVER \
    -F active_lpar_mobility_capable,inactive_lpar_mobility_capable)

#
# Make sure that they are both capable of active and inactive migration
#
if [ $SRC_CAP = $DEST_CAP ] && [ $SRC_CAP = "1,1" ]
then
    #
    # List all the partitions on the source system
    #
    for LPAR in $(lslparmigr -r lpar -m $SRC_SERVER -F name)
    do
        #
        # Only migrate "aixlinux" partitions. VIO servers cannot be migrated
        #
        LPAR_ENV=$(lssyscfg -r lpar -m $SRC_SERVER \
            --filter lpar_names=$LPAR -F lpar_env)
        if [ $LPAR_ENV = "aixlinux" ]
        then
            #
            # Make sure that the partition is not already migrating
            #
            LPAR_STATE=$(lslparmigr -r lpar -m $SRC_SERVER \
                --filter lpar_names=$LPAR -F migration_state)
            if [ "$LPAR_STATE" = "Not Migrating" ]
            then
                #
                # Perform a validation to see if there's a good chance of success
                #
                migrlpar -o v -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
                RC=$?
                if [ $RC -ne 0 ]
                then
                    echo "Validation failed. Cannot migrate partition $LPAR"
                else
                    #
                    # Everything looks good, let's do it...
                    #
                    echo "migrating $LPAR from $SRC_SERVER to $DEST_SERVER"
                    migrlpar -o m -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
                    RC=$?
                    if [ $RC -ne 0 ]
                    then
                        #
                        # Something went wrong, let's try to recover
                        #
                        echo "There was an error RC = $RC. Attempting recovery"
                        migrlpar -o r -m $SRC_SERVER -p $LPAR
                        break
                    fi
                fi
            fi
        fi
    done
fi

5.8 Migration awareness


A migration-aware application is one that is designed to recognize and
dynamically adapt to changes in the underlying system hardware after being
moved from one system to another.

Most applications do not require any changes to work correctly and efficiently
with Live Partition Mobility. Certain applications can have dependencies on
characteristics that change between the source and destination servers, and
other applications may adjust their behavior to facilitate the migration.

Applications that probably should be made migration-aware include:


򐂰 Applications that use processor and memory affinity characteristics to tune
their behavior because affinity characteristics may change as a result of
migration. The externally visible behavior remains the same, but performance
variations, for better or worse, can be observed because of different server
characteristics.
Applications that use processor binding maintain their binding to the same
logical processors across migrations, but in reality the physical processors will
have changed. Binding is usually done to maintain hot caches, but clearly the
physical processor move requires a warming of the cache hierarchy on the
destination system. This process usually occurs very quickly and should not
be visible to the users.
򐂰 Applications that are tuned for a given cache architecture, such as hierarchy,
size, line-size, and associativity. These applications are usually limited to
high-performance computing applications, but the just-in-time (JIT) compiler
of the IBM Java™ Virtual Machine is also optimized for the cache-line size of
the processor on which it was launched.
򐂰 Performance analysis, capacity planning, and accounting tools and their
agents should also be made migration-aware because the processor
performance counters may change between the source and destination
servers, as may the processor type and frequency. Additionally, tools that
calculate an aggregate system load based on the sum of the loads in all
hosted partitions must be aware that a partition has left the system or that a
new partition arrived.
򐂰 Workload managers (WLM)

An application that is migration-aware might perform the following actions:


򐂰 Keep track of changes to system characteristics, such as cache-line size or
serial numbers, and modify tuning or behavior accordingly.
򐂰 Terminate the application on the source system and restart it on the
destination.
򐂰 Reroute workloads to another system.
򐂰 Clean up system-specific buffers and logs.
򐂰 Refuse new incoming requests or delay pending operations.
򐂰 Increase time-out thresholds, such as the PowerHA heartbeat.
򐂰 Block the sending of partition shutdown requests.
򐂰 Refuse a partition migration in the check phase to prevent a non-migratable
application from being migrated.

5.9 Making applications migration-aware


Mobility awareness can be built into an application by using the standard AIX
dynamic reconfiguration notification infrastructure. This infrastructure offers two
mechanisms for alerting applications about configuration changes:
򐂰 Using the SIGRECONFIG signal and the dynamic reconfiguration APIs
򐂰 Registering scripts with the AIX dynamic reconfiguration infrastructure

Using the SIGRECONFIG and dynamic reconfiguration APIs requires additional


code in your applications. Dynamic logical partitioning (dynamic LPAR) scripts
allow you to add awareness to applications for which you do not have the source
code.

5.9.1 Migration phases
The dynamic LPAR notification framework defines three operational phases:
򐂰 The check phase notification allows applications to signal their readiness to
migrate. The check phase allows applications with root authority to refuse a
migration.
򐂰 The prepare phase notification alerts applications that the migration (or
dynamic reconfiguration) is imminent. This phase allows applications to take
any steps necessary to help with the process.
򐂰 The post phase notification alerts applications that the migration (or dynamic
reconfiguration) is complete. This allows applications to take any recovery
steps to resume service on the destination system.

The check and prepare phases take place on the source system; the post phase
occurs on the destination after the device tree and ODM have been updated to
reflect the destination system configuration.

5.9.2 Making programs migration aware using APIs


Application programming interfaces are provided to make programs
migration-aware. The SIGRECONFIG signal is sent to all applications at each of
the three migration phases. Applications can watch (trap) this signal and use the
DLPAR-API system calls to learn more about the operation in progress. Be aware
that if your program does trap the SIGRECONFIG signal, it will be notified of all
dynamic-reconfiguration operations, not just Live Partition Mobility events.

Note: An application must not block the SIGRECONFIG signal and the signal
must be handled in a timely manner. The dynamic LPAR and Live Partition
Mobility infrastructure wait a short period of time for a reply from applications.
If no response occurs after this amount of time, the system assumes all is well
and proceeds to the next phase. You can speed up a migration or dynamic
reconfiguration operation by acknowledging the SIGRECONFIG event even if
your application takes no action.

Applications must perform the following operations to be notified of a Live


Partition Mobility operation:
1. Catch the SIGRECONFIG signal by using the sigaction() or sigwait() system
calls. The default action is to ignore the signal.
2. Control the signal mask of at least one of the application’s threads and the
priority of the handling thread such that the signal can be delivered and
handled promptly.

3. Use the dr_reconfig() system call, through the signal handler, to determine
the nature of the reconfiguration event and other pertinent information. For
the check phase, the application should pass DR_RECONFIG_DONE to
accept a migration or DR_EVENT_FAIL to refuse. Only applications with root
authority may refuse a migration.

The dr_reconfig() system call has been modified to support partition migration.
The returned dr_info structure includes the following bit-fields:
򐂰 migrate
򐂰 partition

These fields are for the new migration action and the partition object that is the
object of the action.

The code snippet in Example 5-5 shows how dr_reconfig() might be used. This
code would run in a signal-handling thread.

Example 5-5 SIGRECONFIG signal-handling thread


#include <signal.h>
#include <sys/dr.h>
:
:
struct dr_info drInfo;      // For event-related information
sigset_t signalSet;         // The signal set to wait on
int signalId;               // Identifies which signal was received
int reconfigFlag;           // For accepting or refusing the DR
int rc;                     // return code

// Initialize the signal set
SIGINITSET(signalSet);

// Add the SIGRECONFIG signal to the signal set
SIGADDSET(signalSet, SIGRECONFIG);

// loop forever
while (1) {
    // Wait on signals in the signal set
    sigwait(&signalSet, &signalId);
    if (signalId == SIGRECONFIG) {
        if ((rc = dr_reconfig(DR_QUERY, &drInfo)) != 0) {
            // handle the error
        } else {
            if (drInfo.migrate) {
                if (drInfo.check) {
                    /*
                     * If migration OK  reconfigFlag = DR_RECONFIG_DONE
                     * If migration NOK reconfigFlag = DR_EVENT_FAIL
                     */
                    rc = dr_reconfig(reconfigFlag, &drInfo);
                } else if (drInfo.pre) {
                    /*
                     * Prepare the application for migration
                     */
                    rc = dr_reconfig(DR_RECONFIG_DONE, &drInfo);
                } else if (drInfo.post) {
                    /*
                     * We're being woken up on the destination.
                     * Check the new environment and resume normal service
                     */
                } else {
                    // Handle the error cases
                }
            } else {
                // It's not a migration. Handle or ignore the DR
            }
        }
    }
}

You can use the sysconf() system call to check the system configuration on the
destination system. The _system_configuration structure has been modified to
include the following fields:
icache_size Size of the L1 instruction cache
icache_asc Associativity of the L1 instruction cache
dcache_size Size of the L1 data cache
dcache_asc Associativity of the L1 data cache
L2_cache_size Size of the L2 cache
L2_cache_asc Associativity of the L2 cache
itlb_size Instruction translation look-aside buffer size
itlb_asc Instruction translation look-aside buffer associativity
dtlb_size Data translation look-aside buffer size
dtlb_asc Data translation look-aside buffer associativity
tlb_attrib Translation look-aside buffer attributes
slb_size Segment look-aside buffer size

These fields are updated after the partition has arrived at the destination system
to reflect the underlying physical processor characteristics. In this fashion,
applications that are moved from one processor architecture to another can
dynamically adapt themselves to their execution environment. All new processor
features, such as the single-instruction multiple-data (SIMD) and decimal floating
point instructions, are exposed through the _system_configuration structure and
the lpar_get_info() system call.

The lpar_get_info() call returns two capabilities, defined in <sys/dr.h>:


LPAR_INFO1_MSP_CAPABLE If the partition is a Virtual I/O Server
partition, this capability indicates the
partition is also a mover service partition.
LPAR_INFO1_PMIG_CAPABLE Indicates whether the partition is capable of
migration.

5.9.3 Making applications migration-aware using scripts


Dynamic reconfiguration scripts allow you to cleanly quiesce and restart your
applications over a migration. You can register your own scripts with the dynamic
reconfiguration infrastructure by using the drmgr command. The command
copies the scripts to a private repository, the default location of which is
/usr/lib/dr/scripts/all.

The scripts can be implemented in any interpreted (scripted) or compiled
language. The drmgr command is detailed in the IBM InfoCenter at:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/
com.ibm.aix.cmds/doc/aixcmds2/drmgr.htm

The syntax of the dynamic reconfiguration scripts is:
[env_variable1=value ...] scriptname command [param1 ...]

The input variables are set as environment variables on the command line,
followed by the name of the script to be invoked and any additional parameters.

Live Partition Mobility introduces four commands, listed in Table 5-1.

Table 5-1 Dynamic reconfiguration script commands for migration


Command and parameter Description

checkmigrate <resource> Indicates whether a migration should continue.
A script might indicate that a migration should not
continue if the application is dependent upon an
invariable execution environment. The script is
called with this command at the check-migration phase.

premigrate <resource> At this point the migration will be initiated. The script
can reconfigure or suspend an application to facilitate
the migration process. The script is called with this
command at the prepare-migration phase.

postmigrate <resource> This command is called after the migration has
completed. The script can reconfigure or resume
applications that were changed or suspended in the
prepare phase. The script is called with this command
in the post-migration phase.

undopremigrate <resource> If an error is encountered during the check phase, the
script is called with this command to roll back any
actions that might have been taken by the checkmigrate
command in preparation for the migration.

In addition to the script commands, a pmig resource type indicates a partition
migration operation. The register command of your dynamic LPAR scripts can
choose to handle this resource type. A script supporting partition migration
should write out the name-value pair DR_RESOURCE=pmig when it is invoked
with the register command. A dynamic LPAR script can be registered to support
only partition migration. No new environment variables are passed to the
dynamic LPAR scripts for Live Partition Mobility support.

The code in Example 5-6 on page 184 shows a Korn shell script that detects the
partition migration reconfiguration events. For this example, the script simply logs
the called command to a file.

Example 5-6 Outline Korn shell dynamic LPAR script for Live Partition Mobility
#!/usr/bin/ksh

if [[ $# -eq 0 ]]
then
    echo "DR_ERROR=Script usage error"
    exit 1
fi

ret_code=0
command=$1

case $command in
    scriptinfo )
        echo "DR_VERSION=1.0"
        echo "DR_DATE=27032007"
        echo "DR_SCRIPTINFO=partition migration test script"
        echo "DR_VENDOR=IBM"
        echo "SCRIPTINFO" >> /tmp/migration.log;;

    usage )
        echo "DR_USAGE=$0 command [parameter]"
        echo "USAGE" >> /tmp/migration.log;;

    register )
        echo "DR_RESOURCE=pmig"
        echo "REGISTER" >> /tmp/migration.log;;

    checkmigrate )
        echo "CHECK_MIGRATE" >> /tmp/migration.log;;

    premigrate )
        echo "PRE_MIGRATE" >> /tmp/migration.log;;

    postmigrate )
        echo "POST_MIGRATE" >> /tmp/migration.log;;

    undopremigrate )
        echo "UNDO_CHECK_MIGRATE" >> /tmp/migration.log;;

    * )
        echo "*** UNSUPPORTED *** : $command" >> /tmp/migration.log
        ret_code=10;;
esac

exit $ret_code

If the file name of the script is migrate.sh, then you can register it with the
dynamic reconfiguration infrastructure by using the following command:
# drmgr -i ./migrate.sh

Use the drmgr -l command to confirm script registration, as shown in
Example 5-7. In this example, you can see the output from the scriptinfo,
register, and usage commands of the shell script.

Example 5-7 Listing the registered dynamic LPAR scripts


# drmgr -l
DR Install Root Directory: /usr/lib/dr/scripts
Syslog ID: DRMGR
------------------------------------------------------------
/usr/lib/dr/scripts/all/migrate.sh partition migration test
script
Vendor:IBM, Version:1.0, Date:27032007
Script Timeout:10, Admin Override Timeout:0
Memory DR Percentage:100
Resources Supported:
Resource Name: pmig Resource Usage:
/usr/lib/dr/scripts/all/migrate.sh command [parameter]
------------------------------------------------------------

5.10 Making kernel extension migration aware


Kernel extensions can register to be notified of migration events. The notification
mechanism uses the standard dynamic reconfiguration mechanism, which is the
reconfig_register_ext() kernel service. The service interface signature is:
int reconfig_register_ext(handler, actions, h_arg, h_token, name)
    int (*handler)(void*, void*, long long action, void* dri);
    long long actions;
    void* h_arg;
    ulong *h_token;
    char* name;

The actions parameter supports the following values for mobility awareness:
򐂰 DR_MIGRATE_CHECK
򐂰 DR_MIGRATE_PRE
򐂰 DR_MIGRATE_POST
򐂰 DR_MIGRATE_POST_ERROR

The interface to the handler is:
int handler(void* event, void* h_arg, long long action, void* resource_info);

The action parameter indicates the specific reconfiguration operation being
performed, for example, DR_MIGRATE_PRE. The resource_info parameter
maps to the following structure for partition migration:
struct dri_pmig {
    int       version;
    int       destination_lpid;
    long long streamid;
};

The version number is changed if additional parameters are added to this
structure. The destination_lpid and streamid fields are not available for the check
phase.

The interfaces to the reconfig_unregister() and reconfig_complete() kernel
services are not changed by Live Partition Mobility.

5.11 Virtual Fibre Channel
Virtual Fibre Channel is a virtualization feature. Virtual Fibre Channel uses
N_Port ID Virtualization (NPIV), and enables PowerVM logical partitions to
access SAN resources using virtual Fibre Channel adapters mapped to a
physical NPIV-capable adapter.

Figure 5-33 shows a basic configuration using virtual Fibre Channel and a single
Virtual I/O Server in the source and destination systems before migration occurs.

Figure 5-33 Basic NPIV virtual Fibre Channel infrastructure before migration

After migration, the configuration is similar to the one shown in Figure 5-34.

Figure 5-34 Basic NPIV virtual Fibre Channel infrastructure after migration

Benefits of NPIV and virtual Fibre Channel
The addition of NPIV and virtual Fibre Channel adapters reduces the number of
components and steps necessary to configure shared storage in a Virtual I/O
Server configuration:
򐂰 With virtual Fibre Channel support, you do not map individual disks in the
Virtual I/O Server to the mobile partition. LUNs from the storage subsystem
are zoned in a switch with the mobile partition’s virtual Fibre Channel adapter
using its worldwide port names (WWPNs), which greatly simplifies Virtual I/O
Server storage management.
򐂰 LUNs assigned to the virtual Fibre Channel adapter appear in the mobile
partition as standard disks from the storage subsystem. LUNs do not appear
on the Virtual I/O Server unless the physical adapter's WWPN is zoned.

򐂰 Standard multipathing software for the storage subsystem is installed on the
mobile partition. Multipathing software is not installed into the Virtual I/O
Server partition to manage virtual Fibre Channel disks. The absence of the
software provides system administrators with familiar configuration
commands and problem determination processes in the client partition.
򐂰 Partitions can take advantage of standard multipath features, such as load
balancing across multiple virtual Fibre Channel adapters presented from dual
Virtual I/O Servers.

Required components
The mobile partition must meet the requirements described in Chapter 2, “Live
Partition Mobility mechanisms” on page 19. In addition, the following components
must be configured in the environment:
򐂰 An NPIV-capable SAN switch
򐂰 An NPIV-capable physical Fibre Channel adapter on the source and
destination Virtual I/O Servers
򐂰 HMC Version 7 Release 3.4, or later
򐂰 Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later
򐂰 AIX 5.3 TL9, or later
򐂰 AIX 6.1 TL2 SP2, or later
򐂰 Each virtual Fibre Channel adapter on the Virtual I/O Server mapped to an
NPIV-capable physical Fibre Channel adapter
򐂰 Each virtual Fibre Channel adapter on the mobile partition mapped to a virtual
Fibre Channel adapter in the Virtual I/O Server
򐂰 At least one LUN mapped to the mobile partition’s virtual Fibre Channel
adapter
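
The NPIV-capable adapter and switch-port requirements can be checked on each
Virtual I/O Server with the lsnports command, which lists the physical Fibre
Channel ports and reports a fabric value of 1 when both the adapter and the
attached switch port support NPIV. The following sketch parses lsnports-style
output; the sample data is illustrative and not captured from a live system.

```shell
#!/bin/sh
# Sketch: list the NPIV-capable ports of a Virtual I/O Server.
# A fabric value of 1 in the lsnports output indicates that both the
# adapter and the attached switch port support NPIV. The sample output
# below is illustrative; on a real VIOS, pipe the live command instead:
#   lsnports | npiv_capable_ports

npiv_capable_ports() {
    # Skip the header line and print the name of every port whose
    # fabric column (third field) is 1.
    awk 'NR > 1 && $3 == 1 { print $1 }'
}

npiv_capable_ports <<'EOF'
name             physloc                        fabric tports aports swwpns awwpns
fcs1             U789D.001.DQDWWHY-P1-C1-T2          1     64     62   2048   2046
fcs2             U789D.001.DQDWWHY-P1-C2-T1          0     64     64   2048   2048
EOF
```

In this sample only fcs1 is reported as NPIV-capable; a port that shows fabric 0
is either not NPIV-capable or is attached to a switch port without NPIV enabled.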

Mobile partitions may have virtual SCSI and virtual Fibre Channel LUNs.
Migration of LUNs between virtual SCSI and virtual Fibre Channel is not
supported at the time of publication.

See Chapter 2 in PowerVM Virtualization on IBM System p: Managing and
Monitoring, SG24-7590 for details about virtual Fibre Channel and NPIV
configuration.

5.11.1 Basic virtual Fibre Channel Live Partition Mobility preparation
This section describes how to set up and migrate a partition that is using virtual
Fibre Channel disk resources.

In addition to the requirements described in 4.1, “Basic Live Partition Mobility
environment” on page 90, the infrastructure must meet the following
requirements for migrations with virtual Fibre Channel adapters:
򐂰 The destination Virtual I/O Server must contain an NPIV-capable physical
Fibre Channel adapter that is connected to an NPIV-enabled switch port with
connectivity to the same storage targets that the client uses on the source
system.
򐂰 On the source Virtual I/O Server partition, do not set the adapter as required
when you create a virtual Fibre Channel adapter. The virtual Fibre Channel
adapter must be solely accessible by the client adapter of the mobile partition.
򐂰 On the destination Virtual I/O Server partition, do not create any virtual Fibre
Channel adapters for the mobile partition. These are created automatically by
the migration function.
򐂰 The mobile partition’s virtual Fibre Channel WWPNs must be zoned on the
switch with the storage subsystem. You must include both WWPNs from each
virtual Fibre Channel adapter in the zone. The WWPN on the physical
adapter on the source and destination Virtual I/O Server does not have to be
included in the zone.
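
Because each virtual Fibre Channel client adapter carries two WWPNs and both
must be zoned, a small check against the zone member list can catch the
common mistake of zoning only the currently active WWPN. The WWPNs and
zone members in this sketch are illustrative placeholders, not real values.

```shell
#!/bin/sh
# Sketch: verify that BOTH WWPNs of a virtual Fibre Channel client
# adapter appear in a switch zone. All values here are placeholders.

zone_has_both() {
    # zone_has_both "ZONE_MEMBERS" WWPN_A WWPN_B -> 0 if both are zoned
    for wwpn in "$2" "$3"; do
        case " $1 " in
            *" $wwpn "*) ;;                            # this WWPN is zoned
            *) echo "missing from zone: $wwpn"; return 1 ;;
        esac
    done
    return 0
}

# Hypothetical WWPN pair of one client adapter, plus the storage port.
ZONE="c0507600a1b20004 c0507600a1b20005 5005076801234567"
zone_has_both "$ZONE" c0507600a1b20004 c0507600a1b20005 \
    && echo "zone contains both WWPNs of the client adapter"
```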

Figure 5-35 shows a mobile partition virtual Fibre Channel adapter example.

Figure 5-35 Client partition virtual Fibre Channel adapter WWPN properties

Figure 5-36 shows an example of virtual Fibre Channel properties for the Virtual
I/O Server.

Figure 5-36 Virtual Fibre Channel adapters in the Virtual I/O Server

Figure 5-37 shows an example of the virtual Fibre Channel adapter properties for
the Virtual I/O Server, called a Server Fibre Channel Adapter.

Figure 5-37 Virtual I/O Server Fibre Channel adapter properties

The Virtual I/O Server lsdev and lsmap commands can be used to query the
virtual Fibre Channel configuration and mapping to the mobile partition, as
shown in Example 5-8 on page 192.

Example 5-8 Virtual I/O Server commands lsmap and lsdev virtual Fibre Channel output
$ lsmap -all -npiv
Name Physloc ClntID ClntName ClntOS
============= ================================== ====== ============== =======
vfchost0 U9117.MMA.101F170-V1-C16 2 mobile2 AIX

Status:LOGGED_IN
FC name:fcs3 FC loc code:U789D.001.DQDYKYW-P1-C6-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0 VFC client DRC:U9117.MMA.101F170-V2-C60-T1

$ lsdev -dev vfchost*
name status description
vfchost0 Available Virtual FC Server Adapter
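
When several mappings exist, the login status of each vfchost device can be
summarized instead of read line by line. This sketch condenses lsmap -all -npiv
output to one line per mapping; the embedded sample mimics the format of
Example 5-8, and on a real Virtual I/O Server you would pipe the live command
instead.

```shell
#!/bin/sh
# Sketch: summarize `lsmap -all -npiv` output as "vfchost client status".
# The sample input mimics Example 5-8; it is not from a live system.
#   lsmap -all -npiv | npiv_summary     <- real usage on a VIOS

npiv_summary() {
    awk '/^vfchost/ { host = $1; client = $4 }
         /^Status:/ { sub("Status:", "", $1); print host, client, $1 }'
}

npiv_summary <<'EOF'
vfchost0      U9117.MMA.101F170-V1-C16                2 mobile2        AIX

Status:LOGGED_IN
FC name:fcs3                    FC loc code:U789D.001.DQDYKYW-P1-C6-T2
EOF
```

For the sample input this prints "vfchost0 mobile2 LOGGED_IN", one line per
NPIV mapping.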

5.11.2 Migration of a virtual Fibre Channel based partition
This section describes the steps necessary to migrate a mobile partition that
uses virtual Fibre Channel adapters.

After validating that the mobile partition can be migrated, follow steps 1 on
page 104 through 10 on page 112 in 4.3, “Preparing for an active partition
migration” on page 94. Instead of selecting virtual SCSI adapters in step 11 on
page 113, select the virtual Fibre Channel adapter assignment as shown in
Figure 5-38.

Figure 5-38 Selecting the virtual Fibre Channel adapter

Proceed with the remaining migration steps at step 12 on page 114 as described
in 4.3, “Preparing for an active partition migration” on page 94. The Summary
panel will look similar to Figure 5-39. Verify the settings you have selected and
then click Finish to begin the migration.

Figure 5-39 Virtual Fibre Channel migration summary window

When the migration is complete, verify that the mobile partition is on the
destination system, as shown in Figure 5-40.

Figure 5-40 Migrated partition
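
If you prefer the HMC command line to the GUI wizard, the validation and
migration steps above can also be driven with the migrlpar command. In this
sketch the managed-system and partition names are placeholders, and the
helper only prints each command (a dry run) rather than executing it; adapt it
before running on a real HMC.

```shell
#!/bin/sh
# Sketch: driving the same validation and migration from the HMC command
# line with migrlpar instead of the GUI wizard. The managed-system and
# partition names are hypothetical, and `run` only PRINTS each command.

run() { echo "+ $*"; }

SOURCE="9117-MMA-SN101F170"    # hypothetical source managed system
DEST="9117-MMA-SN100F6A0"      # hypothetical destination managed system
LPAR="mobile2"

# Validate first: -o v reports problems without moving the partition.
run migrlpar -o v -m "$SOURCE" -t "$DEST" -p "$LPAR"

# If validation is clean, perform the migration with -o m.
run migrlpar -o m -m "$SOURCE" -t "$DEST" -p "$LPAR"
```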

5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing
With multipath I/O, the logical partition accesses the same storage data using
two different paths, each provided by a separate Virtual I/O Server.

Note: With NPIV-based disks, both paths can be active. For NPIV and virtual
Fibre Channel, the storage multipath code is loaded into the mobile partition.
The multipath capabilities depend on the storage subsystem type and the
multipath code deployed in the mobile partition.

The migration is possible only if the destination system is configured with two
Virtual I/O Servers that can provide the same multipath setup. They both must
have access to the shared disk data, as shown in Figure 5-41.

Figure 5-41 Dual VIOS and client multipath I/O to dual NPIV before migration

When migration is complete, on the destination system, the two Virtual I/O
Servers are configured to provide the two paths to the data, as shown in
Figure 5-42.

Figure 5-42 Dual VIOS and client multipath I/O to dual VIOS after migration

If the destination system is configured with only one Virtual I/O Server, the
migration cannot be performed. The migration process would create both paths
through the same Virtual I/O Server, and having a single Virtual I/O Server map
the same LUNs on different virtual Fibre Channel adapters is not a
recommended configuration.

To migrate the partition, you must first remove one path from the source
configuration before starting the migration. The removal can be performed
without interfering with the running applications. The configuration becomes a
simple single Virtual I/O Server migration.
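
The path removal can be sketched as the following sequence, run on the mobile
partition. The device names are illustrative (they correspond to the second path
in this scenario), and the helper only prints each command rather than
executing it, so adapt it before use.

```shell
#!/bin/sh
# Sketch: collapse to a single path before migrating to a destination with
# only one Virtual I/O Server. Device names are illustrative; `run` PRINTS
# each command instead of executing it.

run() { echo "+ $*"; }

# 1. Delete the paths that go through the second Virtual I/O Server
#    (reached through the fscsi1 protocol device in this sketch).
run rmpath -d -l hdisk0 -p fscsi1

# 2. Remove the now-unused virtual Fibre Channel client adapter.
run rmdev -R -dl fcs1

# 3. On the HMC, use dynamic LPAR to remove the corresponding client
#    adapter slot; the partition then migrates as a single-VIOS setup.
```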

5.11.4 Live Partition Mobility with Heterogeneous I/O
This section describes how to migrate a partition that is currently using disk
resources presented on dedicated physical Fibre Channel adapters.

Partitions cannot use physical adapters, Host Ethernet Adapters (HEA), or
non-default virtual serial adapters when participating in an active migration. Any
adapters of these types must be deconfigured and removed before migration.

For this scenario, we assume that you are beginning with a mobile partition that
is using a physical Fibre Channel adapter, and that a Virtual I/O Server exists
and is running on the source and destination systems. Another assumption is
that the Virtual I/O Server partitions have one physical NPIV-capable Fibre
Channel adapter, and the mobile partition’s storage subsystem LUNs are
available to the physical adapter currently used by the mobile partition.
Figure 5-43 describes our starting configuration. See Chapter 2 in PowerVM
Virtualization on IBM System p: Managing and Monitoring, SG24-7590 for
additional details about virtual Fibre Channel and NPIV configuration.

Figure 5-43 The mobile partition using physical resources

Before proceeding, verify that the environment meets the requirements for Live
Partition Mobility with NPIV and virtual Fibre Channel as outlined in 5.11, “Virtual
Fibre Channel” on page 187. Because this scenario describes how to handle a
mobile partition that contains physical adapters, you can disregard the
requirement of having no physical adapters assigned to the partition. In this
case, set the physical adapter as desired (not required) in the partition profile.
You will use dynamic logical partitioning (dynamic LPAR) to remove the adapter
from the mobile partition prior to migration.

Configure virtual Fibre Channel storage
To move your physical storage devices to virtual storage devices on your mobile
partition named mobile2:
1. Use dynamic LPAR to add a virtual Fibre Channel server adapter to the
running source Virtual I/O Server. Figure 5-44 shows the resulting virtual
Fibre Channel adapter properties.

Figure 5-44 Virtual Fibre Channel server adapter properties

2. Use dynamic LPAR to add a virtual Fibre Channel client adapter, with
properties matching the server adapter from the previous step, to the running
mobile partition.

Figure 5-45 shows the virtual Fibre Channel client adapter properties.

Figure 5-45 Virtual Fibre Channel client adapter properties

Record the virtual Fibre Channel client adapter’s slot number and WWPN pair
for use when configuring the storage subsystem in step 4.
3. Save the changes made to the mobile partition to a new profile name to
preserve the generated WWPNs for future use by the mobile partition.

Important: Similar to virtual SCSI, you do not have to create virtual Fibre
Channel server adapters for your mobile partition on the destination Virtual
I/O Server. They are created automatically for you during the migration.

4. By using standard SAN configuration techniques, assign the mobile partition’s
storage to the virtual Fibre Channel adapters that use the WWPN pair
generated in step 2 on page 199, and properly zone the virtual Fibre Channel
WWPNs with the storage subsystem’s WWPN.
5. On the source Virtual I/O Server, run the cfgdev command to discover the
newly added virtual Fibre Channel server adapter (vfchost0). The lsdev
command shows the changes as seen in Example 5-9.

Example 5-9 Show the virtual Fibre Channel server adapter
$ lsdev -dev vfchost*
name status description
vfchost0 Available Virtual FC Server Adapter

6. Execute the vfcmap command to associate the virtual Fibre Channel server
adapter to the physical Fibre Channel adapter.

As shown in Example 5-10, the adapter port in use on the NPIV Fibre
Channel adapter is fcs1.

Example 5-10 Virtual Fibre Channel mappings created and listed
$ vfcmap -vadapter vfchost0 -fcp fcs1
vfchost0 changed
$ lsmap -all -npiv
Name Physloc ClntID ClntName ClntOS
============= ================================== ====== ============== =======
vfchost0 U9117.MMA.100F6A0-V1-C70 2 mobile2 AIX

Status:LOGGED_IN
FC name:fcs1 FC loc code:U789D.001.DQDWWHY-P1-C1-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs2 VFC client DRC:U9117.MMA.100F6A0-V2-C70-T1

7. Record the existing physical Fibre Channel adapter and disk configuration.
Use these details when you remove the physical adapter from the partition.
8. Run the cfgmgr command on the mobile partition to configure the new virtual
Fibre Channel client adapter. The lsdev command shows the new adapter as
fcs2 in Example 5-11. Our physical adapter is a dual-port adapter listed as
fcs0 and fcs1. The mobile partition’s LUNs are attached using the fcs1 port.

Example 5-11 Fibre Channel device listing on the mobile partition
# lsdev|egrep 'fcs*|fscs*'
fcs0 Available 00-00 4Gb FC PCI Express Adapter
fcs1 Available 00-01 4Gb FC PCI Express Adapter
fcs2 Available 70-T1 Virtual Fibre Channel Client
fscsi0 Available 00-00-01 FC SCSI I/O Controller Protocol
fscsi1 Available 00-01-01 FC SCSI I/O Controller Protocol
fscsi2 Available 70-T1-01 FC SCSI I/O Controller Protocol

9. Verify that the partition’s disks are enabled on the new virtual Fibre Channel
adapters by using the lspath command as shown in Example 5-12. Because
our storage subsystem uses active and passive controller paths, two paths
are shown for each disk. Other storage subsystems might use different
commands to list available paths and show different output.

Example 5-12 lspath output from the mobile partition
# lspath
Enabled hdisk0 fscsi1
Enabled hdisk0 fscsi1
Enabled hdisk0 fscsi2
Enabled hdisk0 fscsi2

Figure 5-46 shows the mobile partition using a virtual and physical path to disk.

Figure 5-46 The mobile partition using physical and virtual resources

Remove physical Fibre Channel adapters
To remove the physical Fibre Channel adapter from the mobile partition:
1. Remove all physical devices, with their children, by using the rmdev command.
See 5.6.6, “Remove adapters from the mobile partition” on page 160 for
details about removing required and desired adapters. Use the device names
of the physical adapters recorded in step 7 on page 201. For example, if the
only physical devices in use are the physical Fibre Channel adapters, run the
commands shown in Example 5-13 to remove the physical devices.

Example 5-13 Removing the physical adapters and their child devices
# rmdev -R -dl fcs0
# rmdev -R -dl fcs1
# lsdev -Cc adapter|grep fcs
fcs2 Available 70-T1 Virtual Fibre Channel Client Adapter

Verify that you are using only the virtual Fibre Channel path to the disk as
displayed in Example 5-14.

Example 5-14 Remaining paths after physical adapter has been removed
# lspath
Enabled hdisk0 fscsi2

2. Use your HMC to remove all physical adapter slots from the mobile partition
by using dynamic LPAR.
3. Remove all virtual serial adapters from slots 2 and above from the mobile
partition using dynamic LPAR. Figure 5-47 shows the mobile partition using
only virtual resources.

Figure 5-47 The mobile partition using virtual resources

Ready to migrate
The mobile partition is now ready to be migrated. Close any virtual terminals on
the mobile partition, because they will lose connection when the partition
migrates to the destination system. Virtual terminals can be reopened when the
partition is on the destination system.

After the migration is complete, consider adding physical resources back to the
mobile partition, if they are available on the destination system.

Note: The active mobile partition profile is created on the destination system
without any references to any physical I/O slots that were present in your
profile on the source system. Any other mobile partition profiles are copied
unchanged.

Figure 5-48 shows the mobile partition migrated to the destination system.

Figure 5-48 The mobile partition on the destination system

5.12 Processor compatibility modes
Processor compatibility modes enable you to move logical partitions between
servers that have different processor types, without upgrading the operating
environments installed in the logical partitions.

You can run several versions of AIX, Linux, and Virtual I/O Server in logical
partitions on POWER5 and POWER6 technology-based servers. Certain older
versions of these operating environments do not support the capabilities that are
available with new processors, limiting your flexibility to move logical partitions
between servers that have different processor types.

A processor compatibility mode is a value assigned to a logical partition by the
hypervisor that specifies the processor environment on which the logical partition
can successfully operate. When you move a logical partition to a destination
server that has a different processor type from the source server, the processor
compatibility mode enables that logical partition to run in a processor
environment on the destination server in which it can successfully operate. In
other words, the processor compatibility mode enables the destination server to
provide the logical partition with a subset of processor capabilities that are
supported by the operating environment that is installed in the logical partition.

The processor compatibility mode in which the logical partition currently operates
is the current processor compatibility mode of the logical partition. The
hypervisor sets the current processor compatibility mode for a logical partition by
using the following information:
򐂰 Processor features supported by the operating environment running in the
logical partition
򐂰 Preferred processor compatibility mode that you specify

The preferred processor compatibility mode of a logical partition is the mode in
which you want the logical partition to operate. When you activate the logical
partition, the hypervisor checks the preferred processor compatibility mode and
determines whether the operating environment supports that mode. If the
operating environment supports the preferred processor compatibility mode
(which is the highest mode that the hypervisor can assign to a logical partition),
the hypervisor assigns the preferred processor compatibility mode to the logical
partition. If the operating environment does not support the preferred processor
compatibility mode, the hypervisor assigns to the logical partition the most fully
featured processor compatibility mode (which is a lower mode than the preferred
mode) that is supported by the operating environment.

If you want a logical partition to run in an enhanced mode, you must specify the
enhanced mode as the preferred mode for the logical partition. If the operating

environment supports the corresponding non-enhanced mode, then the
hypervisor assigns the enhanced mode to the logical partition when you activate
the logical partition.

Logical partitions in the POWER6 enhanced processor compatibility mode can
only run on POWER6 technology-based servers.

You cannot dynamically change the current processor compatibility mode of a
logical partition. To change the current processor compatibility mode, you must change
the preferred processor compatibility mode, shut down the logical partition, and
restart the logical partition. The hypervisor attempts to set the current processor
compatibility mode to the preferred mode that you specified.

A POWER6 processor cannot emulate all features of a POWER5 processor. For
example, certain types of performance monitoring might not be available for a
logical partition if the current processor compatibility mode of a logical partition is
set to the POWER5 mode.

When you move an active logical partition between servers that have different
processor types, both the current and preferred processor compatibility modes of
the logical partition must be supported by the destination server. When you move
an inactive logical partition between servers that have different processor types,
only the preferred mode of the logical partition must be supported by the
destination server. Table 5-2 lists current and preferred processor compatibility
modes supported on each server type.

Table 5-2 Processor compatibility modes supported by server type

Server processor type                  Supported current modes     Supported preferred modes
-------------------------------------  --------------------------  --------------------------
Refreshed POWER6 technology-based      POWER5, POWER6,             default, POWER6,
server (POWER6+™)                      POWER6+, POWER6+            POWER6+, POWER6+
                                       enhanced                    enhanced
POWER6 technology-based server         POWER5, POWER6,             default, POWER6,
                                       POWER6 enhanced             POWER6 enhanced
POWER7 technology-based server         POWER5, POWER6,             default, POWER6,
                                       POWER6+, POWER7             POWER6+, POWER7

For example, you want to move an active logical partition from a POWER6
technology-based server to a Refreshed POWER6 technology-based server so
that the logical partition can take advantage of the additional capabilities
available with the Refreshed POWER6 processor. You set the preferred
processor compatibility mode to the default mode and when you activate the
logical partition on the POWER6 technology-based server, it runs in the

POWER6 mode. When you move the logical partition to the Refreshed POWER6
technology-based server, both the current and preferred modes remain
unchanged for the logical partition until you restart the logical partition. When you
restart the logical partition on the Refreshed POWER6 technology-based server,
the hypervisor evaluates the configuration. Because the preferred processor
compatibility mode is set to the default mode and the logical partition now runs
on a Refreshed POWER6 technology-based server, the highest mode available
is the POWER6+ mode and the hypervisor changes the current processor
compatibility mode to the POWER6+ mode.

When you want to move the logical partition back to the POWER6
technology-based server, you must change the preferred mode from the default
mode to the POWER6 mode (because the POWER6+ mode is not supported on
a POWER6 technology-based server) and restart the logical partition on the
Refreshed POWER6 technology-based server. When you restart the logical
partition, the hypervisor evaluates the configuration. Because the preferred mode
is set to POWER6, the hypervisor does not set the current mode to a higher
mode than POWER6. Remember, the hypervisor first determines whether it can
set the current mode to the preferred mode. If not, it determines whether it can
set the current mode to the next highest mode, and so on. In this case, the
operating environment supports the POWER6 mode, so the hypervisor sets the
current mode to the POWER6 mode, so that you can move the logical partition
back to the POWER6 technology-based server.

The easiest way to maintain this moving back and forth type of flexibility between
different types of processors is to determine the processor compatibility mode
supported on both the source and destination servers and set the preferred
processor compatibility mode of the logical partition to the highest mode
supported by both servers. In this example, you set the preferred processor
compatibility mode to the POWER6 mode, which is the highest mode supported
by both POWER6 technology-based servers and Refreshed POWER6
technology-based servers.

The same logic from the previous examples applies to inactive migrations, except
inactive migrations do not require the current processor compatibility mode of the
logical partition because the logical partition is inactive. After you move an
inactive logical partition to the destination server and activate that logical partition
on the destination server, the hypervisor evaluates the configuration and sets the
current mode for the logical partition just like it does when you restart a logical
partition after an active migration. The hypervisor attempts to set the current
mode to the preferred mode. If it cannot, it checks the next highest mode and so
on. If you specify the default mode as the preferred mode for an inactive logical
partition, you can move that inactive logical partition to a server of any processor
type. Remember, when you move an inactive logical partition between servers
with different processor types, only the preferred mode of the logical partition

must be supported by the destination server. And because all servers support
the default processor compatibility mode, you can move an inactive logical
partition with the preferred mode of default to a server with any processor type.
When the inactive logical partition is activated on the destination server, the
preferred mode remains set to default, and the hypervisor determines the current
mode for the logical partition.
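
The mode rules above can be condensed into a small helper: an active migration
requires both the current and preferred modes to be in the destination server's
supported list, while an inactive migration requires only the preferred mode. The
mode spellings and list below are illustrative stand-ins, not HMC output.

```shell
#!/bin/sh
# Sketch: the compatibility check described above. For an active migration
# both the current and preferred modes must be in the destination server's
# supported list; for an inactive migration only the preferred mode must.
# Mode spellings here are illustrative stand-ins.

mode_supported() {      # mode_supported MODE "SUPPORTED_LIST"
    case " $2 " in *" $1 "*) return 0 ;; *) return 1 ;; esac
}

can_migrate() {         # can_migrate active|inactive PREF CURR "SUPPORTED_LIST"
    mode_supported "$2" "$4" || return 1
    [ "$1" = "active" ] && { mode_supported "$3" "$4" || return 1; }
    return 0
}

DEST_MODES="default POWER6 POWER6+"     # e.g. a Refreshed POWER6 server
can_migrate active POWER6 POWER6 "$DEST_MODES" && echo "active migration OK"
can_migrate inactive default POWER6+ "$DEST_MODES" && echo "inactive migration OK"
```

Note the POWER5 caveat described in 5.12.1: a destination server supports the
POWER5 current mode even though it is not listed, so a real check must
special-case it.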

5.12.1 Verifying the processor compatibility mode of the mobile partition
Determine whether the processor compatibility mode of the mobile partition is
supported on the destination server, and update the mode, if necessary, so that
you can successfully move the mobile partition to the destination server.

Note: The verification of the processor compatibility mode of the mobile
partition using the Integrated Virtualization Manager is discussed in
Chapter 7, “Integrated Virtualization Manager for Live Partition Mobility” on
page 221.

To verify that the processor compatibility mode of the mobile partition is
supported on the destination server by using the HMC:
1. Identify the processor compatibility modes that are supported by the
destination server by entering the following command using the HMC
command-line interface (CLI) that manages the destination server:
lssyscfg -r sys -F lpar_proc_compat_modes
Record these values so that you can refer to them later.
2. Identify the preferred processor compatibility mode of the mobile partition:
a. In the navigation area of the HMC that manages the source server, expand
Systems Management  Servers and select the source server.
b. In the contents area, select the mobile partition.
c. From the Tasks menu, select Configuration  Manage Profiles. The
Managed Profiles window opens.
d. Select the active partition profile of the mobile partition or select the
partition profile from which the mobile partition was last activated.
e. From the Actions menu, select Edit. The Logical Partition Profile
Properties window is displayed.
f. Click the Processors tab to view the preferred processor compatibility
mode. Record this value so that you can refer to it later.

The result of these steps is shown in Figure 5-49.

Figure 5-49 Processor compatibility mode options of the mobile partition

3. If you plan to perform an inactive migration, skip this step and go to step 4 on
page 210.
If you plan to perform an active migration, identify the current processor
compatibility mode of the mobile partition, as follows:
a. In the navigation area of the HMC that manages the source server, expand Systems Management → Servers and select the source server.
b. In the contents area, select the mobile partition and click Properties.
c. Select the Hardware tab and view the Processor Compatibility Mode,
which is the current processor compatibility mode of the mobile partition.
Record this value so that you can refer to it later.

Chapter 5. Advanced topics 209


The result of these steps is shown in Figure 5-50.

Figure 5-50 Current processor compatibility mode of the mobile partition

4. Verify that the preferred and current processor compatibility modes that you identified in steps 2 on page 208 and 3 on page 209 are in the list of supported processor compatibility modes identified in step 1 on page 208 for the destination server. For active migrations, both the preferred and current processor compatibility modes of the mobile partition must be supported by the destination server. For inactive migrations, only the preferred processor compatibility mode must be supported by the destination server.

Attention: If the current processor compatibility mode of the mobile partition is the POWER5 mode, be aware that the POWER5 mode does not appear in the list of modes supported by the destination server. The destination server nevertheless supports the POWER5 mode, even though it is not listed.
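The step-4 comparison itself is mechanical once the three values are recorded. This sketch uses illustrative values in place of the ones you recorded, and encodes the POWER5 exception described in the Attention box:

```shell
# Sketch of the step-4 check, using illustrative values recorded earlier.
supported_modes="default,POWER6,POWER6+"    # from step 1 (destination server)
preferred_mode="POWER6"                     # from step 2 (partition profile)
current_mode="POWER5"                       # from step 3 (partition properties)

# True if $1 appears in the comma-separated list $2
in_list() {
    case ",$2," in *",$1,"*) return 0 ;; *) return 1 ;; esac
}

check_mode() {
    # POWER5 is accepted even though it is never listed by the destination
    if [ "$1" = "POWER5" ] || in_list "$1" "$supported_modes"; then
        echo "$1: OK"
    else
        echo "$1: not supported on destination"
    fi
}

check_mode "$preferred_mode"   # required for active and inactive migrations
check_mode "$current_mode"     # required for active migrations only
```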

5. If the preferred processor compatibility mode of the mobile partition is not supported by the destination server, use step 2 on page 208 to change the preferred mode to a mode that is supported by the destination server. For example, if the preferred mode of the mobile partition is the POWER6+ mode and you plan to move the mobile partition to a POWER6 technology-based server, change the preferred mode to the POWER6 mode: although a POWER6 technology-based server does not support the POWER6+ mode, it does support the POWER6 mode.

6. If the current processor compatibility mode of the mobile partition is not
supported by the destination server, try the following solutions:
a. If the mobile partition is active, it is possible that the hypervisor has not had the opportunity to update the current mode of the mobile partition. Shut down and reactivate the mobile partition so that the hypervisor can evaluate the configuration and update the current mode of the mobile partition.
b. If the current mode of the mobile partition still does not match the list of
supported modes that you identified for the destination server, use step 2
on page 208 to change the preferred mode of the mobile partition to a
mode that is supported by the destination server.
Then, reactivate the mobile partition so that the hypervisor can evaluate
the configuration and update the current mode of the mobile partition.


Chapter 6. Migration status


This chapter discusses topics related to migration status and recovery
procedures to be followed when errors occur during migration of a logical
partition. The chapter assumes you have a working knowledge of Live Partition
Mobility prerequisites and actions.

This chapter contains the following topics:


• 6.1, “Progress and reference code location” on page 214
• 6.2, “Recovery” on page 216
• 6.3, “A recovery example” on page 218

© Copyright IBM Corp. 2007, 2009. All rights reserved. 213


6.1 Progress and reference code location
Live Partition Mobility is driven by the HMC. The HMC has knowledge of the
status of all partition migrations and provides the latest reference code for each
logical partition.

To view the migration status on the GUI, expand Systems Management → Servers, and select the managed system. A system status and reference code is provided for each logical partition. For example, in Figure 6-1 the QA and mobile partitions are undergoing an active migration, and for both partitions the latest reference code is displayed.

Figure 6-1 Partition reference codes

The same information can be obtained from the HMC’s CLI by using the
lsrefcode and lslparmigr commands. See 5.7, “The command-line interface”
on page 162 for details.
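As a sketch of how that CLI output can be consumed in a script, the fragment below parses one record in the comma-separated name=value format that lslparmigr produces. The record itself is a fabricated sample, and real field names can vary by HMC release:

```shell
# Hypothetical record in the format produced by:
#   lslparmigr -r lpar -m <source-server>
# Field names and values are illustrative.
sample='name=mobile,migration_state=Migration In Progress,migration_type=active'

# Extract one attribute from a name=value,... record
get_attr() {
    printf '%s\n' "$2" | tr ',' '\n' | sed -n "s/^$1=//p"
}

state=$(get_attr migration_state "$sample")
echo "mobile: $state"
```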

Reference codes describe the progress of the migration. You can find a
description of reference codes in “SRCs, current state” on page 260. When the
reference code represents an error, a migration recovery procedure might be
required.

After a migration is issued on the HMC GUI, a progress window similar to the one shown in Figure 6-2 is provided. The percentage indicates the completion of the memory state transfer during an active migration. In the case of an inactive migration, no memory state is transferred and the value remains zero.

Figure 6-2 Migration progress window

During an inactive migration, only the HMC is involved, and it holds all migration
information.

An active migration requires the coordination of the mobile partition and the two
Virtual I/O Servers that have been selected as mover service partitions. All these
objects record migration events in their error logs. You can find a description of
partition-related error logs in “Operating system error logs” on page 266.

The mobile partition records the start and the end of the migration process. You
may extract the data by using the errpt command, as shown in Example 6-1.

Example 6-1 Migration log on mobile partition


[mobile:/]# errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
A5E6DB96 1118164408 I S pmig Client Partition Migration Completed
08917DC6 1118164408 I S pmig Client Partition Migration Started
...

Migration information is recorded also on the Virtual I/O Servers that acted as a
mover service partition. To retrieve it, use the errlog command.

Chapter 6. Migration status 215


Example 6-2 shows the data available on the source mover service partition. The earlier event records when the mobile partition's execution was suspended on the source system and activated on the destination system; the later event records the successful completion of the migration.

Example 6-2 Migration log on source mover service partition


$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
3EB09F5A 1118164408 I S Migration Migration completed successfully
6CB10B8D 1118164408 I S unspecified Client partition suspend issued
...

On the destination mover service partition, the error log registers only the end of
the migration, as shown in Example 6-3.

Example 6-3 Migration log on destination mover service partition


$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
3EB09F5A 1118164408 I S Migration Migration completed successfully
...

The error logs on the mobile partition and the Virtual I/O Servers also record
events that prevent the migration from succeeding, such as user interruption or
network problems. They can be used to trace all migration events on the system.
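Because errpt lists the newest entries first, a script can derive the last known migration state from the pmig events alone. This sketch runs against a copy of the Example 6-1 output rather than a live partition; the message strings it prints are the sketch's own:

```shell
# Illustrative errpt output from the mobile partition (as in Example 6-1)
log='A5E6DB96 1118164408 I S pmig Client Partition Migration Completed
08917DC6 1118164408 I S pmig Client Partition Migration Started'

# errpt lists the newest entry first, so the first pmig line is the latest state
latest=$(printf '%s\n' "$log" | grep 'pmig' | head -n 1)

case "$latest" in
    *Completed*) msg="migration completed" ;;
    *Aborted*)   msg="migration aborted; recovery may be required" ;;
    *Started*)   msg="migration still in progress" ;;
esac
echo "$msg"
```

The same filtering applies to the errlog output of the mover service partitions, whose migration entries carry the Migration resource name instead of pmig.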

6.2 Recovery
Live Partition Mobility is designed to verify whether a requested migration can be
executed and to monitor all migration processes. If a running migration cannot be
completed, a rollback procedure is executed to undo all configuration changes
applied.

A partition migration might be prevented from running for two main reasons:
• The migration is not valid and does not meet prerequisites.
• An external event prevents a migration component from completing its job.

The migration validation described in 4.4.1, “Performing the validation steps and eliminating errors” on page 99 takes care of checking all prerequisites. It can be explicitly executed at any moment and does not affect the mobile partition. Perform a validation before requesting any migration. The migration process, however, performs another validation before starting any configuration changes.

After the inactive or active migration begins, the HMC manages the configuration
changes and monitors the status of all involved components. If any error occurs,
recovery actions automatically begin.

When the HMC cannot perform a recovery, administrator intervention is required to perform problem determination and issue the final recovery steps. This situation might occur when the HMC cannot contact a migration component (for example, the mobile partition, a Virtual I/O Server, or a system service processor) because of a network problem or an operator error. After a timeout, an error message is displayed, requesting a recovery.

When a recovery is required, the mobile partition name can appear on both the source and the destination system. The partition is either powered down (inactive migration) or actually running on only one of the two systems (active migration). Configuration cleanup is performed during recovery.

While a mobile partition requires a recovery, its configuration cannot be changed; this prevents any attempt to modify its state before it is returned to normal operation. Activating the same partition on two systems is not possible.

Recovery is performed by selecting the migrating partition and then selecting Operations → Mobility → Recover, as shown in Figure 6-3.

Figure 6-3 Recovery menu

A pop-up window opens, similar to the one shown in Figure 6-4, requesting
recovery confirmation. Click Recover to start a recovery.

Note: Use the Force recover check box only when:
• The HMC cannot contact one of the migration components that require a new configuration, or the migration has been started by another HMC.
• A normal recovery does not succeed.

Figure 6-4 Recovery pop-up window

The same actions performed on the GUI can be executed with the migrlpar
command on the HMC’s command line. See 5.7, “The command-line interface”
on page 162 for details.
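For scripted recovery, the sketch below only builds the migrlpar recovery command line instead of executing it, because it cannot run outside an HMC. The managed system and partition names are placeholders, and the --force flag (intended to mirror the Force recover check box) is an assumption to verify against your HMC release:

```shell
# Builds (but does not run) the HMC recovery command line.
# System and partition names are placeholders; verify the --force
# spelling against your HMC documentation before use.
build_recover_cmd() {
    managed_system=$1
    partition=$2
    force=$3   # "yes" adds --force, mirroring the Force recover check box

    cmd="migrlpar -o r -m $managed_system -p $partition"
    if [ "$force" = "yes" ]; then
        cmd="$cmd --force"
    fi
    printf '%s\n' "$cmd"
}

build_recover_cmd 9117-MMA-SN101F170 mobile no
```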

After a successful recovery, the partition returns to normal operation state and
changes to its configuration are then allowed. If the migration is executed again,
the validation phase will detect the component that prevented the migration and
will select alternate elements or provide a validation error.

6.3 A recovery example


As an example, we have deliberately created a network outage during an active
partition migration.

During an active migration, the partition state is transferred over the network between the source and destination mover service partitions. The mobile partition continues running on the source system while its state is copied to the destination system. Then, it is briefly suspended on the source and immediately reactivated on the destination.

We unplugged the network connection of one mover service partition in the middle of a state transfer. We had to perform several tests in order to create this scenario, because the migration of the partition (2 GB of memory) was extremely fast.

In the HMC GUI, the migration process fails and an error message is displayed.

Because the migration stopped in the middle of the state transfer, the partition
configuration on the two involved systems is kept in the migrating status, waiting
for the administrator to identify the problem and decide how to continue.

In the HMC, the migrating partition, mobile, appears on both systems, although it is active only on the source system. On the destination system, only the shell of the partition is present. The situation can be viewed by expanding Systems Management → Custom Groups → All partitions. In the content area, a situation similar to Figure 6-5 is shown.

Figure 6-5 Interrupted active migration status

The applications running on the partition have not been affected by the network outage and continue running on the source system. The only visible effect is in the partition’s error log, which shows the start and the abort of the migration, as described in Example 6-4. No action is required on the partition.

Example 6-4 Migrating partition’s error log after aborted migration


[mobile]# errpt
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
5E075ADF 1118180308 I S pmig Client Partition Migration Aborted
08917DC6 1118180208 I S pmig Client Partition Migration Started

Both Virtual I/O Servers, each acting as a mover service partition, have recorded the event in their error logs. On the Virtual I/O Server where the cable was unplugged, we see both the physical network error and the mover service partition communication error, as indicated in Example 6-5.

Example 6-5 Mover service partition with network outage


$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
427E17BD 1118181908 P S Migration Migration aborted: MSP-MSP connection do
0B41DD00 1118181708 I H ent4 ADAPTER FAILURE
...

The other Virtual I/O Server only shows the communication error of the mover
service partition, because no physical error has been created, as indicated in
Example 6-6.

Example 6-6 Mover service partition with communication error


$ errlog
IDENTIFIER TIMESTAMP T C RESOURCE_NAME DESCRIPTION
427E17BD 1118182108 P S Migration Migration aborted: MSP-MSP connection do
...

To recover from an interrupted migration, you must select the mobile partition and select Operations → Mobility → Recover, as shown in Figure 6-3 on page 217.

A pop-up window similar to the one shown in Figure 6-4 on page 218 opens. Click the Recover button, and the partition state is cleaned up (normalized). The mobile partition then exists only on the source system, where it is running; it is removed from the destination system, where it never executed.

After the network outage is resolved, the migration can be issued again. Wait for
the RMC protocol to reset communication between the HMC and the Virtual I/O
Server that had the network cable unplugged.


Chapter 7. Integrated Virtualization Manager for Live Partition Mobility

If the Virtual I/O Server is installed on an IBM Power Systems server that is not managed by a Hardware Management Console, or is on an IBM BladeCenter blade server, the Virtual I/O Server becomes the management partition and provides the Integrated Virtualization Manager for systems management. The Integrated Virtualization Manager provides a Web-based and command-line interface that enables you to migrate a logical partition from one POWER6 or POWER7 technology-based system to another.

In this chapter, we discuss migration types, requirements, and preparation tasks for Live Partition Mobility with the Integrated Virtualization Manager.

This chapter contains the following topics:
• 7.1, “Migration types” on page 222
• 7.2, “Requirements for Live Partition Mobility on IVM” on page 222
• 7.3, “How active Partition Mobility works” on page 225
• 7.4, “How inactive Partition Mobility works” on page 226
• 7.5, “Validation for active Partition Mobility” on page 227
• 7.6, “Validation for inactive Partition Mobility” on page 231
• 7.7, “Preparation for partition migration” on page 232

7.1 Migration types
Two types of migration are available with the Integrated Virtualization Manager
depending on the state of the logical partition:
• If the logical partition is in a running state, the migration is active.
• If the logical partition is in the not activated state, the migration is inactive.

As with Live Partition Mobility conducted by the HMC, before migrating a logical
partition, a validation check should be performed to ensure that the migration will
complete successfully.

The migration task on the local Integrated Virtualization Manager helps you
validate and complete a partition migration to a remote system that is managed
by another Integrated Virtualization Manager.

7.2 Requirements for Live Partition Mobility on IVM


This section lists the requirements for using Live Partition Mobility on an Integrated Virtualization Manager managed system.

Source and destination system requirements

The source and destination systems must be either a POWER7 technology-based server or blade supporting IVM, one of the following POWER6 technology-based models, or a combination of both:
• 8203-E4A (IBM Power System 520 Express)
• 8204-E8A (IBM Power System 550 Express)
• 8234-EMA (IBM Power System 560 Express)
• 9407-M15 (IBM Power System 520 Express)
• 9408-M25 (IBM Power System 520 Express)
• 9409-M50 (IBM Power System 550 Express)
• 7998-60X (BladeCenter JS12)
• 7998-61X (BladeCenter JS22)

Both the source and destination systems must be at a firmware level 01Ex320 or
later, where x is an S for BladeCenter or an L for Entry servers (such as the
Power 520, Power 550, and Power 560).

Although there is a minimum required firmware level, each system can have a different level of firmware. The level of the source system firmware must be compatible with the destination firmware. Installing the most current available system firmware is recommended.

222 IBM PowerVM Live Partition Mobility


Source and destination Virtual I/O Server requirements
The Virtual I/O Server has to be installed at release level 1.5 or higher on both the source and destination systems. Similar to system firmware, the Virtual I/O Server version and fix pack level should be at the most current level.

To verify the current code level:
1. From the Service Management menu, click Updates (see Figure 7-1). The Management Partition Updates panel opens, and the code level is shown.
2. Click the link to the Virtual I/O Server support site to obtain newer updates and fixes, if available:
http://techsupport.services.ibm.com/server/vios/download

Figure 7-1 Checking release level of the Virtual I/O Server

Chapter 7. Integrated Virtualization Manager for Live Partition Mobility 223


The ioslevel command can be executed from the CLI on the Virtual I/O Server to determine the current version and fix pack level of the Virtual I/O Server and to see whether an upgrade is necessary. The output of this command is shown in Example 7-1.

Example 7-1 The output of the ioslevel command


$ ioslevel
2.1.0.1-FP-20.0
$
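A quick scripted check of the 1.5 minimum can parse the ioslevel output directly. The sample level below matches Example 7-1; the parsing assumes the usual a.b.c.d(-FP-n) format and only compares the first two components:

```shell
# Sample value as reported by ioslevel in Example 7-1
ioslevel_output="2.1.0.1-FP-20.0"

# True if the level meets the 1.5 minimum for Live Partition Mobility
meets_minimum() {
    level=${1%%-*}          # strip a fix-pack suffix: 2.1.0.1-FP-20.0 -> 2.1.0.1
    major=${level%%.*}      # 2
    rest=${level#*.}
    minor=${rest%%.*}       # 1
    [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 5 ]; }
}

if meets_minimum "$ioslevel_output"; then
    echo "Virtual I/O Server level OK for Live Partition Mobility"
fi
```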

Note: On servers that are managed by the Integrated Virtualization Manager, the source and destination Virtual I/O Server logical partitions might also be referred to as the source and destination management partitions.

Note: IVM reserves virtual slots: slots 0-9 on the Virtual I/O Server and slots 0-3 on client partitions. These slots cannot be part of a migration.

Operating system requirements

The operating system running in the mobile partition has to be AIX or Linux. A Virtual I/O Server logical partition or an i5/OS® logical partition cannot be migrated. The operating system must be at one of the following levels:
• AIX 5L Version 5.3 Technology Level 7 or later
• AIX Version 6.1 or later
• Red Hat Enterprise Linux Version 5 (RHEL5) Update 1 or later
• SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later

Previous versions of AIX and Linux can participate in inactive partition migration
if the operating systems support virtual devices and IBM Power Systems
POWER6 technology-based systems.

Storage requirements
For a list of supported disks and optical devices, see the data sheet available on
the Virtual I/O Server support Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html

Network requirements

The migrating partition uses the virtual LAN for network access. The VLAN must be bridged to a physical network using a shared Ethernet adapter in the Virtual I/O Server partition. Your LAN must be configured such that migrating partitions can continue to communicate with other necessary clients and servers after a migration is completed.

7.3 How active Partition Mobility works


With active Partition Mobility you can move a running logical partition, including
its operating system and applications, from one server to another without
disrupting the operation of that logical partition. Active Partition Mobility on the
Integrated Virtualization Manager is similar to the active Partition Mobility on the
HMC. However, several differences exist, such as how the migration is initiated
and the virtual adapter mapping choices.

The active migration process involves the following steps:


1. You ensure that all requirements are satisfied and all preparation tasks are
completed.
2. You use the migration task on the Integrated Virtualization Manager to initiate
the active Partition Mobility.
3. The Integrated Virtualization Manager extracts the physical device description
for each physical adapter on the Virtual I/O Server logical partition on the
source server. The Integrated Virtualization Manager uses the extracted
information to determine whether the Virtual I/O Server logical partitions on
the destination server can provide the mobile partition with the same virtual
SCSI, virtual Ethernet, and virtual Fibre Channel configuration that exists on
the source server. This process includes verifying that the Virtual I/O Server
logical partitions on the destination server have enough available slots to
accommodate the virtual adapter configuration of the mobile partition. The
Integrated Virtualization Manager uses all of this information to generate a list
of recommended virtual adapter mappings for the mobile partition on the
destination server.
4. The Integrated Virtualization Manager prepares the source and destination
environments for partition migration. This includes using the virtual adapter
mappings from the previous step to map the virtual adapters on the mobile
partition to the virtual adapters on the Virtual I/O Server logical partition on
the destination server.
5. The Integrated Virtualization Manager transfers the logical partition state from
the source environment to the destination environment, as follows:
a. The source mover service partition extracts the logical partition state
information from the source server and sends it to the destination mover
service partition over the network.

b. The destination mover service partition receives the logical partition state
information and installs it on the destination server.
6. The Integrated Virtualization Manager suspends the mobile partition on the
source server. The source mover service partition continues to transfer the
logical partition state information to the destination mover service partition.
7. The hypervisor resumes the mobile partition on the destination server.
8. The Integrated Virtualization Manager completes the migration. All resources
that were consumed by the mobile partition on the source server are
reclaimed by the source server. The Integrated Virtualization Manager
removes the virtual SCSI adapters and the virtual Fibre Channel adapters
(that were connected to the mobile partition) from the source Virtual I/O
Server logical partitions.
9. You perform post-requisite tasks, such as adding dedicated I/O adapters to
the mobile partition or adding the mobile partition to a partition workload
group.

7.4 How inactive Partition Mobility works


With inactive partition migration, you can move a logical partition that is powered off from one server to another. Inactive Partition Mobility on the Integrated Virtualization Manager is similar to inactive Partition Mobility on the HMC; however, the same differences as with active Partition Mobility apply.

The inactive migration process involves the following steps:


1. You ensure that all requirements are satisfied and all preparation tasks are
completed.
2. You shut down the mobile partition.
3. You use the migration task on the Integrated Virtualization Manager to initiate
inactive Partition Mobility.
4. The Integrated Virtualization Manager extracts the physical device description
for each physical adapter on the Virtual I/O Server logical partition on the
source server. The Integrated Virtualization Manager uses the extracted
information to determine whether the Virtual I/O Server logical partitions on
the destination server can provide the mobile partition with the same virtual
SCSI, virtual Ethernet, and virtual Fibre Channel configuration that exists on
the source server. This includes verifying that the Virtual I/O Server logical
partitions on the destination server have enough available slots to
accommodate the virtual adapter configuration of the mobile partition. The
Integrated Virtualization Manager uses all of this information to generate a list

of recommended virtual adapter mappings for the mobile partition on the
destination server.
5. The Integrated Virtualization Manager prepares the source and destination
environments for Partition Mobility.
6. The Integrated Virtualization Manager transfers the partition state from the
source environment to the destination environment.
7. The Integrated Virtualization Manager completes the migration. All resources
that were consumed by the mobile partition on the source server are
reclaimed by the source server. The Integrated Virtualization Manager
removes the virtual SCSI adapters and the virtual Fibre Channel adapters
(that were connected to the mobile partition) from the source Virtual I/O
Server logical partitions.
8. You activate the mobile partition on the destination server.
9. You perform post-requisite tasks, such as establishing virtual terminal
connections or adding the mobile partition to a partition workload group.

7.5 Validation for active Partition Mobility


Before an active logical partition is migrated, you have to validate your
environment. You can use the validation function on the Integrated Virtualization
Manager to validate your system configuration. If the Integrated Virtualization
Manager detects a configuration or connection problem, it displays an error
message with information to help you resolve the problem.

The validation function on the Integrated Virtualization Manager checks the following items:
• The source and destination servers, POWER Hypervisor, Virtual I/O Servers, and mover service partitions for active partition migration capability and compatibility
• That the Resource Monitoring and Control (RMC) connections to the mobile partition, the source and destination Virtual I/O Servers, and the connection between the source and destination mover service partitions are established. You may start this check manually with the rmcctrl command.
• That no physical adapters are in the mobile partition and that no virtual serial adapters are in virtual slots higher than 1
• That no client virtual SCSI disks on the mobile partition are backed by logical volumes and that no disks map to internal disks
• The mobile partition, its operating system, and its applications for active migration capability. AIX passes the check migration request to those applications and kernel extensions that have registered to be notified of dynamic reconfiguration events. The operating system either accepts or rejects the migration.
• That the logical memory block size is the same on the source and destination servers
• That the operating system on the mobile partition is AIX or Linux
• That the mobile partition is not the redundant error path reporting logical partition
• That the mobile partition is not configured with barrier synchronization registers (BSR)
• That the mobile partition is not configured with huge pages
• That the mobile partition does not have a Host Ethernet Adapter (or Integrated Virtual Ethernet)
• That the mobile partition state is Active or Running
• That the mobile partition is not in a partition workload group
• The uniqueness of the mobile partition’s virtual MAC addresses
• That the required Virtual LAN IDs are available on the destination Virtual I/O Server
• That the mobile partition’s name is not already in use on the destination server
• The number of current active migrations against the number of supported active migrations
• That the necessary resources (processors and memory) are available to create a shell logical partition on the destination system. During validation, the Integrated Virtualization Manager extracts the device description for each virtual adapter on the Virtual I/O Server logical partition on the source server. The Integrated Virtualization Manager uses the extracted information to determine whether the Virtual I/O Server logical partitions on the destination server can provide the mobile partition with the same virtual SCSI, virtual Ethernet, and virtual Fibre Channel configuration that exists on the source server. This includes verifying that the Virtual I/O Server logical partitions on the destination server have enough available slots to accommodate the virtual adapter configuration of the mobile partition.

To initiate the validation through the Integrated Virtualization Manager:
1. In Partition Management, select View/Modify Partitions.
2. Select the mobile partition in the Partition Details section and select the More Tasks menu.

3. Select Migrate.
The result is shown in Figure 7-2.

Figure 7-2 More Tasks menu

The Migrate Partition panel opens.

Figure 7-3 shows the Migrate Partition panel.
4. Ensure that the Remote IVM address, Remote IVM user ID, and Remote IVM password are filled in to perform the validation before the actual migration.
5. Click Validate.

Figure 7-3 Validation task for migration

Note: Figure 7-3 gives the impression that you might migrate from an IVM managed system to a remote IVM or HMC managed system. However, at the time of this publication, migration between IVM and HMC managed systems is not supported.
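Assuming the IVM release provides the migrlpar command on the management partition's CLI, a validation request against a remote IVM could be assembled as below. The command is only built, not executed; the host name, user, partition ID, and the flag spellings are all placeholders to verify against your IVM documentation:

```shell
# Placeholder values standing in for the fields of the Migrate Partition panel
remote_ivm="ivm2.example.com"   # Remote IVM address
remote_user="padmin"            # Remote IVM user ID
lpar_id=2                       # ID of the mobile partition

# -o v requests validation only; flag spellings are assumptions, not
# confirmed IVM syntax - check your release's migrlpar documentation.
cmd="migrlpar -o v --ip $remote_ivm -u $remote_user --id $lpar_id"
printf '%s\n' "$cmd"
```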

7.6 Validation for inactive Partition Mobility
Before an inactive logical partition is migrated, you have to validate your
environment. You may use the validation function on the Integrated Virtualization
Manager to validate your system configuration. If the Integrated Virtualization
Manager detects a configuration or connection problem, it displays an error
message with information to help you resolve the problem.

The validation function on the Integrated Virtualization Manager checks the


following items:
򐂰 The Virtual I/O Server and POWER Hypervisor migration capability and
compatibility on the source and destination
򐂰 The Resource Monitoring and Control (RMC) connections to the source and
destination Virtual I/O Servers
򐂰 That the mobile partition name is not already in use at the destination server
򐂰 The uniqueness of virtual Media Access Control (MAC) address
򐂰 That the required Virtual LAN IDs are available on the destination Virtual I/O
Server
򐂰 That the mobile partition is in the Not Activated state
򐂰 That the mobile partition is an AIX or a Linux logical partition
򐂰 That the mobile partition is not the redundant error path reporting logical
partition or a service logical partition
򐂰 That the mobile partition is not a member of a partition workload group
򐂰 The number of current inactive migrations against the number of supported
inactive migrations
򐂰 That all required I/O devices are connected to the mobile partition through a
Virtual I/O Server, that is, there are no physical adapters
򐂰 That the virtual SCSI disks assigned to the logical partition are accessible by
the Virtual I/O Servers on the destination server
򐂰 That no virtual SCSI disks are backed by logical volumes and that no virtual
SCSI disks are attached to internal disks (not on the SAN)
򐂰 That the necessary resources (processors and memory) are available to
create a shell logical partition on the destination system. During validation,
the Integrated Virtualization Manager extracts the device description for each
virtual adapter on the Virtual I/O Server logical partition on the source server.
The Integrated Virtualization Manager uses the extracted information to
determine whether the Virtual I/O Server logical partitions on the destination
server can provide the mobile partition with the same virtual SCSI, virtual

Chapter 7. Integrated Virtualization Manager for Live Partition Mobility 231


Ethernet, and virtual Fibre Channel configuration that exists on the source
server. This includes verifying that the Virtual I/O Server logical partitions on
the destination server have enough available slots to accommodate the virtual
adapter configuration of the mobile partition.
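These validation checks can also be driven from the Integrated Virtualization Manager command line with the migrlpar command and its validate (-o v) operation. The following sketch only assembles and prints the command rather than running it; the host name, user, and partition ID are placeholder values, and the exact option set varies by IVM level, so confirm it against the migrlpar documentation on your system.

```shell
# Dry-run sketch: build (but do not execute) an IVM validation command for
# an inactive migration. All values below are placeholders, not real data.
DEST_HOST="destserver.example.com"   # placeholder destination system
DEST_USER="padmin"                   # placeholder remote user
LPAR_ID=3                            # placeholder mobile partition ID

# migrlpar -o v requests validation only; no partition state is changed.
CMD="migrlpar -o v -t $DEST_HOST --ip $DEST_HOST -u $DEST_USER --id $LPAR_ID"
echo "$CMD"
```

Running the printed command on the source system performs validation only and reports the same classes of problems that the GUI validation function reports.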

7.7 Preparation for partition migration


This section describes how to prepare source and destination servers,
management and mobile partitions, and the configurations for virtual SCSI and
virtual Fibre Channel. It also discusses validating the environment and migrating
the mobile partition.

7.7.1 Preparing the source and destination servers


To prepare the source and destination server for Partition Mobility using the
Integrated Virtualization Manager:
1. Ensure that the source and destination servers are either a POWER7
technology-based server or blade supporting IVM, or one of the following
POWER6 models:
– 8203-E4A (IBM Power System 520 Express)
– 8204-E8A (IBM Power System 550 Express)
– 8234-EMA (IBM Power System 560 Express)
– 9407-M15 (IBM Power System 520 Express)
– 9408-M25 (IBM Power System 520 Express)
– 9409-M50 (IBM Power System 550 Express)
– BladeCenter JS12
– BladeCenter JS22
2. Ensure that the logical memory block (LMB) size is the same on the source
and destination server by determining the logical memory block size of each
server, and then updating the sizes if necessary:
a. From the navigation area, select View/Modify System Properties under
Partition Management. The View/Modify System Properties panel opens.
b. Select the Memory tab to view and to modify the memory usage
information for the managed system.

The result of these steps is shown in Figure 7-4.

Figure 7-4 Checking LMB size with the IVM
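You can also read the logical memory block size from the command line of each system and compare the two values in a small script. The following sketch substitutes captured sample values for the queries; the lshwres invocation shown in the comment follows the HMC-style syntax, which is an assumption to verify on your IVM level.

```shell
# Sketch: compare logical memory block (LMB) sizes of the two servers.
# On each system the value could come from a query such as:
#   lshwres -r mem --level sys -F mem_region_size    (syntax assumed)
# Sample captured values in MB stand in for the two queries:
SRC_LMB=64
DEST_LMB=64

if [ "$SRC_LMB" -eq "$DEST_LMB" ]; then
    RESULT="LMB sizes match: ${SRC_LMB} MB"
else
    RESULT="LMB mismatch: source ${SRC_LMB} MB, destination ${DEST_LMB} MB"
fi
echo "$RESULT"
```

A mismatch must be corrected (and the affected server restarted) before migration is attempted.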

3. Ensure that the destination server has enough available memory to support
the mobile partition:
a. Determine the amount of memory that the mobile partition requires:
i. From the Partition Management menu, click View/Modify Partitions.
The View/Modify Partitions panel opens.
ii. Select the mobile partition.
iii. From the More Tasks menu, select Properties. A new window named
Partition Properties opens.
iv. Click the Memory tab.
v. Record the minimum, assigned, and maximum memory settings.
vi. Click OK.

The result of these steps is shown in Figure 7-5.

Figure 7-5 Checking the amount of memory of the mobile partition

b. Determine the amount of memory that is available on the destination server:
i. From the Partition Management menu, click View/Modify System
Properties. The View/Modify System Properties panel opens.
ii. Click the Memory tab.
iii. From the General tab, record the Current memory available and the
Reserved firmware memory.

The result of these steps is shown in Figure 7-6.

Figure 7-6 Checking the amount of memory on the destination server

c. Compare the values from the mobile partition and the destination server.

Notes:
򐂰 Keep in mind that when you move the mobile partition to the destination
server, the destination server requires more reserved firmware memory
to manage the mobile partition. If necessary, you may add more
available memory to the destination server to support the migration by
dynamically removing memory from the other logical partitions.
򐂰 Use any role other than View Only to modify the memory. Users with the
Service Representative (SR) role cannot view or modify storage values.
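The comparison in step c is simple arithmetic: the destination must have at least the mobile partition's assigned memory plus the additional firmware memory it will reserve. The following sketch uses made-up sample figures in MB; the extra firmware memory value is an assumption, so substitute the figures you recorded in steps a and b.

```shell
# Sketch: memory headroom check with illustrative sample values (MB).
ASSIGNED_MEM=4096          # mobile partition's assigned memory (step a)
CURR_AVAILABLE=6144        # destination "Current memory available" (step b)
EXTRA_FIRMWARE_MEM=256     # additional reserved firmware memory (assumed)

NEEDED=$((ASSIGNED_MEM + EXTRA_FIRMWARE_MEM))
if [ "$CURR_AVAILABLE" -ge "$NEEDED" ]; then
    MEM_OK=yes
else
    MEM_OK=no
fi
echo "needed=${NEEDED} MB available=${CURR_AVAILABLE} MB ok=${MEM_OK}"
```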

4. Ensure that the destination server has enough available processors to
support the mobile partition:
a. Determine how many processors the mobile partition requires:
i. From the Partition Management menu, click View/Modify Partitions.
The View/Modify Partitions panel opens.
ii. Select the logical partition for which you want to view the properties.
iii. From the More Tasks menu, select Properties. A new window named
Partition Properties opens.
iv. Click the Processing tab and record the minimum, maximum, and
available processing units settings.
v. Click OK.
The result of these steps is shown in Figure 7-7.

Figure 7-7 Checking the amount of processing units of the mobile partition

b. Determine the processors available on the destination server:


i. From the Partition Management menu, click View/Modify System
Properties. The View/Modify System Properties panel opens.
ii. Select the Processing tab.
iii. Record the Current processing units available.

The result of these steps is shown in Figure 7-8.

Figure 7-8 Checking the amount of processing units on the destination server

c. Compare the values from the mobile partition and the destination server. If
the destination server does not have enough available processors to
support the mobile partition, use the Integrated Virtualization Manager to
dynamically remove processors from logical partitions on the destination
server.

Note: You must have a super administrator role to perform this task.
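The comparison in step c can be scripted in the same way as the memory check. Because processing units are fractional, the following sketch uses awk for the arithmetic; the two values are illustrative samples, not real data.

```shell
# Sketch: compare the mobile partition's processing units (step a) with the
# destination's available units (step b). Sample values, not real data.
MOBILE_UNITS=0.5        # assigned processing units of the mobile partition
DEST_AVAILABLE=1.2      # "Current processing units available" on destination

# awk handles the fractional comparison that shell integer arithmetic cannot.
PROC_OK=$(awk -v need="$MOBILE_UNITS" -v have="$DEST_AVAILABLE" \
    'BEGIN { ok = (have >= need) ? "yes" : "no"; print ok }')
echo "processing units ok=${PROC_OK}"
```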

5. Verify that the source and destination Virtual I/O Server can communicate
with each other.

7.7.2 Preparing the management partition for Partition Mobility
To prepare the management partition for Partition Mobility using the Integrated
Virtualization Manager:
1. Ensure that the source and destination servers are using Integrated
Virtualization Manager version 1.5 or later. See Figure 7-1 on page 223.
2. Ensure that the PowerVM Enterprise Edition hardware feature is activated.
To view the current enabled features, use the lsvet command. Example 7-2
shows the lsvet command used to verify that Partition Mobility is enabled.

Example 7-2 lsvet command


$ lsvet -t hist
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000423-0517] PowerVM Enterprise Edition code entered.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000403-0332] Virtual I/O server capability enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000405-0333] Micro-partitioning capability enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI05000406-0334] Multiple partitions enabled.
time_stamp=11/11/2008 23:53:10,entry=VIOSI0500040B
time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042A-0341] Inactive partition mobility enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042B-0342] Active partition mobility enabled.

If Partition Mobility is not enabled and the feature was purchased with the
system, obtain the activation code from the IBM Capacity on Demand (CoD)
Web site:
http://www-912.ibm.com/pod/pod
Enter the system type and serial number on the CoD site and click Submit.
A list of available activation codes (such as VET or Virtualization Technology
Code, POD, or CUoD Processor Activation Code) or keys with a type and
description is displayed. If PowerVM Enterprise Edition was not purchased
with the system, it can be upgraded through the Miscellaneous Equipment
Specification (MES) process.
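Instead of reading the whole history, you can search the lsvet output for the two mobility capabilities. The following sketch runs against a captured copy of the Example 7-2 output rather than calling lsvet directly:

```shell
# Sketch: check PowerVM Enterprise mobility enablement from lsvet output.
# LSVET_OUT stands in for the output of: lsvet -t hist
LSVET_OUT='time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042A-0341] Inactive partition mobility enabled.
time_stamp=11/11/2008 23:53:10,entry=[VIOSI0500042B-0342] Active partition mobility enabled.'

# grep is case-sensitive, so "Active" does not also match "Inactive".
ACTIVE_OK=$(echo "$LSVET_OUT" | grep -c "Active partition mobility enabled")
INACTIVE_OK=$(echo "$LSVET_OUT" | grep -c "Inactive partition mobility enabled")
echo "active=${ACTIVE_OK} inactive=${INACTIVE_OK}"
```

If either count is zero, Partition Mobility is not enabled and the activation code must be entered as described next.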

If necessary, enter the activation code in the Integrated Virtualization Manager,
as follows:
1. From the IVM Management menu in the navigation area, click Enter
PowerVM Edition Key.
The Enter PowerVM Edition Key window opens.
2. Enter your activation code for PowerVM Edition and click Apply.

Figure 7-9 shows how to enter the key. When PowerVM Enterprise is
enabled, a Mobility section is added to the More Tasks menu in the
View/Modify Partitions view.

Figure 7-9 Enter PowerVM Edition key on the IVM.

7.7.3 Preparing the mobile partition for Partition Mobility


To prepare the mobile partition for Partition Mobility using the Integrated
Virtualization Manager:
1. Ensure that the operating system is at one of the following levels:
– AIX 5L Version 5.3 with the 5300-07 Technology Level or later
– AIX Version 6.1 or later
– Red Hat Enterprise Linux version 5 Update 1 or later
– SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later
Earlier versions of AIX and Linux can participate in inactive Partition Mobility if
the operating systems support virtual devices and IBM POWER6 models.

2. Ensure that the source and destination management partitions can
communicate with each other.
3. Verify whether the processor compatibility mode of the mobile partition is
supported on the destination server, and update the mode if necessary, so
that you can successfully move the mobile partition to the destination server.
To verify that the processor compatibility mode of the mobile partition is
supported on the destination server using the Integrated Virtualization
Manager:
a. Identify the processor compatibility modes that are supported by the
destination server by entering the following command in the command line
of the Integrated Virtualization Manager on the destination server:
lssyscfg -r sys -F lpar_proc_compat_modes
Record these values so that you can refer to them later.
b. Identify the processor compatibility mode of the mobile partition on the
source server:
i. From the Partition Management menu, click View/Modify Partitions.
The View/Modify Partitions window is displayed.
ii. In the contents area, select the mobile partition.
iii. From the More Tasks menu, select Properties. The Partition
Properties window opens.
iv. Select the Processing tab.
v. View the Current and Preferred processor compatibility mode values
for the mobile partition. Record these values so that you can refer to
them later.

Note: In versions earlier than 2.1 of the Integrated Virtualization Manager,
the Integrated Virtualization Manager displays only the current processor
compatibility mode for the mobile partition.

The result of these steps is shown in Figure 7-10.

Figure 7-10 Processor compatibility mode on the IVM

c. Verify that the processor compatibility mode (which you identified in step b
on page 240) is in the list of supported processor compatibility modes
(which you identified in step a on page 240) for the destination server. For
active migrations, both the preferred and current modes of the mobile
partition must be supported by the destination server. For inactive
migrations, only the preferred mode must be supported by the destination
server.

Note: If the current processor compatibility mode of the mobile partition is
the POWER5 mode, be aware that the POWER5 mode does not appear in
the list of modes supported by the destination server. However, the
destination server does support the POWER5 mode even though it does
not appear in the list of supported modes.
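Steps a through c reduce to a membership test: whether the mobile partition's mode appears in the destination's comma-separated list of supported modes. The following sketch uses sample values in place of the lssyscfg output and also allows for the POWER5 exception that the note describes:

```shell
# Sketch: check a partition's compatibility mode against the destination's
# supported list. DEST_MODES stands in for the output of:
#   lssyscfg -r sys -F lpar_proc_compat_modes   (run on the destination)
DEST_MODES="default,POWER6"
MOBILE_MODE="POWER6"    # sample current/preferred mode of the mobile partition

# Wrap both sides in commas so the match is exact, not a substring match.
case ",$DEST_MODES," in
    *",$MOBILE_MODE,"*) MODE_OK=yes ;;
    *)  # POWER5 is supported even though it is never listed
        if [ "$MOBILE_MODE" = "POWER5" ]; then MODE_OK=yes; else MODE_OK=no; fi ;;
esac
echo "mode ${MOBILE_MODE} supported=${MODE_OK}"
```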

d. If the preferred processor compatibility mode of the mobile partition is not
supported by the destination server, use step b on page 240 to change the
preferred mode to a mode that is supported by the destination server.
For example, suppose the preferred mode of the mobile partition is the
POWER6+ mode and you plan to move the mobile partition to a POWER6
technology-based server. The POWER6 technology-based server does

not support the POWER6+ mode, but it does support the POWER6 mode.
Therefore, you change the preferred mode to the POWER6 mode.
e. If the current processor compatibility mode of the mobile partition is not
supported by the destination server, try the following solutions:
i. If the mobile partition is active, the hypervisor might not have had the
opportunity to update the current mode of the mobile partition since
the preferred mode was last changed. Restart the mobile partition so
that the hypervisor can evaluate the configuration and update the
current mode of the mobile partition.
ii. If the current mode of the mobile partition still does not appear in the
list of supported modes that you identified for the destination server,
use step b on page 240 to change the preferred mode of the mobile
partition to a mode that is supported by the destination server. Then,
restart the mobile partition so that the hypervisor can evaluate the
configuration and update the current mode of the mobile partition.
For example, suppose the mobile partition runs on a refreshed POWER6
processor-based server and its current mode is the POWER6+ mode,
and you want to move it to a POWER6 technology-based server, which
does not support the POWER6+ mode. You change the preferred mode
of the mobile partition to the POWER6 mode and restart the mobile
partition. The hypervisor evaluates the configuration and sets the
current mode to the POWER6 mode, which is supported on the
destination server.
4. Ensure that the mobile partition is not part of a partition workload group.
A partition workload group identifies a set of logical partitions that are located
on the same physical system. A partition workload group is defined when you
use the Integrated Virtualization Manager to configure a logical partition. The
partition workload group is intended for applications that manage software
groups.
You must remove the mobile partition from a partition workload group by
completing the following steps:
a. From the Partition Management menu, click View/Modify Partitions. The
View/Modify Partitions window opens.
b. Select the logical partition that you want to remove from the partition
workload group.
c. From the More Tasks menu, select Properties. A new window named
Partition Properties opens.
d. In the General tab, deselect the Partition workload group participant box.
e. Click OK.

The result of these steps is shown in Figure 7-11.

Figure 7-11 Checking the partition workload group participation

5. Ensure that the mobile partition does not have physical adapters, as follows:
a. From the Partition Management menu, click View/Modify Partitions. The
View/Modify Partitions window opens.
b. Select the mobile partition.
c. From the More Tasks menu, select Properties. A new window named
Partition Properties appears.
d. In the Physical Adapters tab, verify that no physical adapters are
configured.
e. Click OK.

The result of these steps is shown in Figure 7-12.

Figure 7-12 Checking if the mobile partition has physical adapters

Note: During inactive migration, the Integrated Virtualization Manager
removes physical I/O adapters that are assigned to the mobile partition.

6. Ensure that the applications running in the mobile partition are mobility-safe
or mobility-aware. Most software applications running in AIX and Linux logical
partitions do not require any changes to work correctly during active Partition
Mobility. Certain applications might have dependencies on characteristics that
change between the source and destination servers and other applications
might have to adjust to support the migration.

7.7.4 Preparing the virtual SCSI configuration for Partition Mobility


The mobile partition moves from one server to another by the source server
sending the logical partition state information to the destination server over a
local area network (LAN). However, partition disk data cannot pass from one
system to another system over a network. Thus, for Partition Mobility to succeed,
the mobile partition must use storage resources virtualized by a storage area
network (SAN) so that it can access the same storage from both the source and
destination servers.

The physical storage that the mobile partition uses is connected to the SAN. At
least one physical adapter that is assigned to the source Virtual I/O Server
logical partition is connected to the SAN, and at least one physical adapter that is
assigned to the destination Virtual I/O Server logical partition is also connected
to the SAN.

The physical adapter on the source Virtual I/O Server logical partition connects
to one or more virtual adapters on the source Virtual I/O Server logical partition.
Similarly, the physical adapter on the destination Virtual I/O Server logical
partition connects to one or more virtual adapters on the destination Virtual I/O
Server logical partition. Each virtual adapter on the source Virtual I/O Server
logical partition connects to at least one virtual adapter on a client logical
partition. Similarly, each virtual adapter on the destination Virtual I/O Server
logical partition connects to at least one virtual adapter on a client logical
partition.

When you move the mobile partition to the destination server, the Integrated
Virtualization Manager automatically creates and connects virtual adapters on
the destination server, as follows:
򐂰 Creates virtual adapters on the destination Virtual I/O Server logical partition
򐂰 Creates virtual adapters on the mobile partition
򐂰 Connects the virtual adapters on the destination Virtual I/O Server logical
partition to the virtual adapters on the mobile partition

Note: The Integrated Virtualization Manager automatically adds and removes
virtual SCSI adapters to and from the management partition and the logical
partitions when you create and delete a logical partition.

Verify that the destination server provides the same virtual SCSI configuration as
the source server so that the mobile partition can access its physical storage on
the SAN after it moves to the destination server:
1. Verify that the physical storage that is used by the mobile partition is assigned
to the management partition on the source server and to the management
partition on the destination server.
2. Verify that the reserve_policy attributes on the physical volumes are set to
no_reserve so that the mobile partition can access its physical storage on the
SAN from the destination server.
To set the reserve_policy attribute of the physical storage to no_reserve:
a. From either the Virtual I/O Server logical partition on the source server or
the Virtual I/O Server on the destination server, list the disks to which the
Virtual I/O Server has access. Run the following command:
lsdev -type disk
b. List the attributes of each disk. Run the following command, where hdiskX
is the name of the disk that you identified in the previous step:
lsdev -dev hdiskX -attr

The output is shown in Example 7-3.

Example 7-3 lsdev command


$ lsdev -dev hdisk6 -attr
attribute value description
user_settable

PCM PCM/friend/otherapdisk Path Control Module False


PR_key_value none Persistant Reserve Key Value True
algorithm fail_over Algorithm True
autorecovery no Path/Ownership Autorecovery True
clr_q no Device CLEARS its Queue on error True
cntl_delay_time 0 Controller Delay Time True
cntl_hcheck_int 0 Controller Health Check Interval True
dist_err_pcnt 0 Distributed Error Percentage True
dist_tw_width 50 Distributed Error Sample Time True
hcheck_cmd inquiry Health Check Command True
hcheck_interval 60 Health Check Interval True
hcheck_mode nonactive Health Check Mode True
location Location Label True
lun_id 0x0 Logical Unit Number ID False
lun_reset_spt yes LUN Reset Supported True
max_retry_delay 60 Maximum Quiesce Time True
max_transfer 0x40000 Maximum TRANSFER Size True
node_name 0x200200a0b811a662 FC Node Name False
pvid none Physical volume identifier False
q_err yes Use QERR bit True
q_type simple Queuing TYPE True
queue_depth 10 Queue DEPTH True
reassign_to 120 REASSIGN time out value True
reserve_policy no_reserve Reserve Policy True
rw_timeout 30 READ/WRITE time out value True
scsi_id 0x660e00 SCSI ID False
start_timeout 60 START unit time out value True
unique_id 3E213600A0B8000114632000073224919AD540F1815 FAStT03IBMfcp Unique device identifier False
ww_name 0x203200a0b811a662 FC World Wide Name False
$

c. If the reserve_policy attribute is set to anything other than no_reserve, set
the reserve_policy to no_reserve by running the following command,
where hdiskX is the name of the disk for which you want to set the
reserve_policy attribute to no_reserve:
chdev -dev hdiskX -attr reserve_policy=no_reserve
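Steps a through c can be combined into a loop over all disks. The following sketch is a dry run: it prints the chdev commands it would issue instead of executing them, and the disk names and attribute values are embedded samples standing in for the lsdev output.

```shell
# Dry-run sketch: print the chdev commands needed to set no_reserve on
# every disk whose reserve_policy is something else. The get_policy function
# returns sample values standing in for: lsdev -dev hdiskX -attr reserve_policy
get_policy() {
    case "$1" in
        hdisk6) echo no_reserve ;;
        hdisk7) echo single_path ;;
    esac
}

FIXES=""
for d in hdisk6 hdisk7; do          # stand-in for: lsdev -type disk
    if [ "$(get_policy "$d")" != "no_reserve" ]; then
        FIXES="$FIXES chdev -dev $d -attr reserve_policy=no_reserve;"
        echo "would run: chdev -dev $d -attr reserve_policy=no_reserve"
    fi
done
```

Reviewing the printed commands before running them on the Virtual I/O Server avoids changing disks that are not part of the mobile partition's storage.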
3. Verify that the virtual disks have a unique identifier (UDID), a physical
volume identifier (PVID), or an IEEE volume attribute, as follows:
a. To verify whether the virtual device has an IEEE volume attribute identifier,
run the following command on the Virtual I/O Server:
lsdev -dev hdiskX -attr
If the output does not have the ieee_volname field, the virtual device has
no IEEE volume identifier attribute.
b. To verify whether the virtual device has a UDID, type the commands:
oem_setup_env
odmget -qattribute=unique_id CuAt
exit
Only disks that have a UDID will be listed in the output.

c. To verify whether the virtual device has a PVID, run the following
command:
lspv
The output shows the disks with their respective PVIDs.
d. If the virtual disks do not have a UDID, IEEE volume attribute identifier, or
PVID, assign an identifier, as follows:
i. Upgrade your vendor software and repeat the procedure. Before
upgrading, be sure to preserve any virtual SCSI devices that you
created.
ii. If the upgrade does not produce a UDID or IEEE volume attribute
identifier, run the following command to put a PVID on the physical
volume:
chdev -dev hdiskX -attr pv=yes
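The three checks in step 3 can be folded into a single test per disk: the disk is usable for Partition Mobility if any one of the IEEE volume attribute, UDID, or PVID is present. The following sketch parses a sample fragment of lsdev attribute output; the attribute values are illustrative.

```shell
# Sketch: decide whether a disk has a usable identifier (IEEE volume
# attribute, UDID, or PVID). ATTRS stands in for: lsdev -dev hdisk7 -attr
ATTRS='pvid            none       Physical volume identifier  False
unique_id       3E213600A0B8000291B080000520C023C6B410F1815  Unique device identifier  False'

HAS_ID=no
# IEEE volume attribute: present only if an ieee_volname field exists.
if echo "$ATTRS" | grep -q "^ieee_volname"; then HAS_ID=yes; fi
# UDID: a unique_id field with a value other than "none".
if echo "$ATTRS" | awk '$1=="unique_id" && $2!="none" {f=1} END {exit !f}'; then HAS_ID=yes; fi
# PVID: a pvid field with a value other than "none".
if echo "$ATTRS" | awk '$1=="pvid" && $2!="none" {f=1} END {exit !f}'; then HAS_ID=yes; fi
echo "identifier present=${HAS_ID}"
```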
4. Verify that the mobile partition has access to its physical storage from both the
source and destination environments, as follows:
a. From the Virtual Storage Management menu, click View/Modify Virtual
Storage.
b. On the Virtual Disk tab, verify that the logical partition does not own any
virtual disk.
c. On the Physical Volumes tab, verify that the physical volumes that
mapped to the mobile partition are exportable. See step 3 on page 246,
and see Example 7-4.

Example 7-4 The odmget command


$ oem_setup_env
# odmget -qattribute=unique_id CuAt
...

CuAt:
name = "hdisk7"
attribute = "unique_id"
value = "3E213600A0B8000291B080000520C023C6B410F1815 FAStT03IBMfcp"
type = "R"
generic = "D"
rep = "nl"
nls_index = 79

...
#

7.7.5 Preparing the virtual Fibre Channel configuration
If the mobile partition connects to physical storage through virtual Fibre Channel
adapters, the physical adapters that are assigned to the source and destination
Virtual I/O Server logical partitions must support N_Port ID Virtualization (NPIV).

Each virtual Fibre Channel adapter that is created on the mobile partition (or any
client logical partition) is assigned a pair of worldwide port names (WWPNs).
Both WWPNs are assigned to the physical storage that the mobile partition uses.
During normal operation, the mobile partition uses one WWPN to log on to the
SAN and access the physical storage.

When you move the mobile partition to the destination server, there is a brief
period of time during which the mobile partition runs on both the source and
destination servers. Because the mobile partition cannot log on to the SAN from
both the source and destination servers at the same time using the same
WWPN, the mobile partition uses the second WWPN to log on to the SAN from
the destination server during the migration. The WWPNs of each virtual Fibre
Channel adapter move with the mobile partition to the destination server.

Note: The Integrated Virtualization Manager automatically adds and removes
virtual Fibre Channel adapters to and from the management partition and the
logical partitions when you assign and unassign logical partitions to and from
physical Fibre Channel ports using the graphical user interface.

The first step is to assign virtual Fibre Channel adapters to your client partition
using the physical NPIV-capable adapter that is being used in your management
partition. Access the GUI and perform the following tasks:
1. From the I/O Adapter Management menu in the navigation area, click
View/Modify Virtual Fibre Channel.
The View/Modify Virtual Fibre Channel window opens, listing all the physical
ports, connected partitions, and available connections on the physical Fibre
Channel adapters that support NPIV.

Figure 7-13 shows the View/Modify Virtual Fibre Channel window.

Figure 7-13 View/Modify Virtual Fibre Channel window

You can now see the physical Fibre Channel adapters that are capable of
being used for hosting virtual Fibre Channel adapters.
2. Select the physical adapter to use and click Modify Partition Connections.

The Virtual Fibre Channel Partition Connections window opens. See
Figure 7-14.

Figure 7-14 Virtual Fibre Channel Partition Connections window

3. You may now choose to add or remove virtual Fibre Channel adapter
assignments for a partition. In this case, you will select the partition of your
choice so that a virtual Fibre Channel adapter is created and WWPNs are
generated for the client. After you select a partition, the phrase Automatically
generate is displayed in the Worldwide Port Names column, as shown in
Figure 7-15. Click OK. The WWPNs for the client partition are generated.

Figure 7-15 Partition selected shows Automatically generate

Next, verify that the destination server provides the same virtual Fibre Channel
configuration as the source server so that the mobile partition can access its
physical storage on the SAN after it moves to the destination server, as follows:
1. Verify, for each virtual Fibre Channel adapter on the mobile partition, that both
WWPNs are assigned to the same physical storage on the SAN.

View and modify the properties of a logical partition, as follows:
a. Select View/Modify Partitions under Partition Management. The
View/Modify Partitions window opens.
b. Select the logical partition for which you want to view or modify the
properties.
c. From the More Tasks menu, select Properties. A new window named
Partition Properties appears.
d. Select the Storage tab to view or to modify the logical partition storage
settings. You can view and modify settings for virtual disks and physical
volumes.
e. Expand the Virtual Fibre Channel section. See Figure 7-16.

Figure 7-16 Virtual Fibre Channel on source system

f. Click OK to save your changes. The View/Modify Partitions page is
displayed. If the logical partition for which you changed the properties is
inactive, the changes take effect when you next activate the partition. If the
logical partition for which you changed the properties is active and is not
capable of DLPAR, you must shut down and reactivate the logical partition
before the changes take effect.
2. Verify that the switches to which the physical Fibre Channel adapters on both
the source and destination management partitions are cabled support NPIV.
3. Verify that the management partition on the destination server provides a
sufficient number of available physical ports for the mobile partition to

maintain access to its physical storage on the SAN from the destination
server. In the management GUI on the destination system, you may use the
View/Modify Virtual Fibre Channel option as described in step 1 on page 248.
4. Verify the number of physical ports that are available on the destination
server, as follows:
a. Determine the number of physical ports that the mobile partition uses on
the source server:
i. From the Partition Management menu, select View/Modify Partitions.
The View/Modify Partitions panel opens.
ii. Select the mobile partition.
iii. From the More Tasks menu, click Properties. A new window called
Partition Properties appears.
iv. Click the Storage tab.
v. Expand the Virtual Fibre Channel section. See Figure 7-17.

Figure 7-17 Virtual Fibre Channel on destination system

vi. Record the number of physical ports that are assigned to the mobile
partition and click OK.
b. Determine the number of physical ports that are available on the
management partition on the destination server:
i. From the I/O Adapter Management menu, select View/Modify Virtual
Fibre Channel. The View/Modify Virtual Fibre Channel panel opens.

ii. Record the number of physical ports with available connections.
iii. Compare the information that you identified in step a on page 252 to
the information that you identified in step b on page 252.

Note: You may also use the lslparmigr command to verify that the
destination server provides enough available physical ports to support
the virtual Fibre Channel configuration of the mobile partition.
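A dry-run sketch of the lslparmigr invocation mentioned in the note follows. The host and partition names are placeholders, and the exact option set varies by release, so confirm it against the lslparmigr documentation on your system.

```shell
# Dry-run sketch: assemble (but do not execute) an lslparmigr query for the
# virtual adapter mappings available on the destination. Placeholder values;
# the option syntax is an assumption to verify on your IVM or HMC level.
DEST_HOST="destserver.example.com"
MOBILE_LPAR="mobile_lpar1"

LSLPARMIGR_CMD="lslparmigr -r virtualio -t $DEST_HOST --filter lpar_names=$MOBILE_LPAR"
echo "$LSLPARMIGR_CMD"
```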

5. You may now choose to validate and migrate the mobile partition to the
destination server.

After the migration is complete, note the following points:
򐂰 The WWPNs assigned to the virtual Fibre Channel adapters on the partition
do not change, but the adapters are now mapped to the physical adapter
provided by the destination system.
򐂰 The number of partitions connected to the physical adapter increases, and
the number of available ports decreases.

7.7.6 Preparing the network configuration for Partition Mobility


During active partition migration, the two management partitions must be able to
communicate with each other. The network is used to pass the mobile partition
state information and other configuration data from the source environment to the
destination environment.

The mobile partition uses the virtual LAN for network access. The virtual LAN
must be bridged to a physical network using a virtual Ethernet bridge in the
management partition. The LAN must be configured so that the mobile partition
can continue to communicate with other necessary clients and servers after a
migration is completed.

Active Partition Mobility has no specific requirements on the mobile partition’s
memory size. The memory transfer is a procedure that does not interrupt a
mobile partition’s activity, but it might take time when a large memory
configuration is involved on a slow network. Therefore, use a high-bandwidth
connection, such as 1 Gbps Ethernet.
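The bandwidth recommendation can be put into rough numbers: dividing the partition's memory size by the usable network throughput gives a lower bound on the transfer time. The following back-of-the-envelope sketch uses sample figures; the 70% efficiency factor is an assumption, and the real transfer also depends on how actively the partition dirties its memory during the migration.

```shell
# Sketch: lower-bound estimate of memory transfer time (sample figures).
MEM_MB=16384            # mobile partition memory, in MB (sample)
LINK_MBPS=1000          # 1 Gbps Ethernet link
EFFICIENCY_PCT=70       # assumed usable fraction of link bandwidth

# Convert MB to megabits (x8), then divide by effective megabits per second.
SECONDS_EST=$(( MEM_MB * 8 * 100 / (LINK_MBPS * EFFICIENCY_PCT) ))
echo "estimated minimum transfer time: ${SECONDS_EST} seconds"
```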

To prepare your network configuration for Partition Mobility:
1. Configure a virtual Ethernet bridge on the source and destination
management partitions, as follows:
a. From the I/O Adapter Management menu, select View/Modify Virtual
Ethernet. The View/Modify Virtual Ethernet panel opens.
b. Click the Virtual Ethernet Bridge tab.
c. Set each Physical Adapter field to the physical adapter that you want to
use as the virtual Ethernet bridge for each virtual Ethernet network.
d. Click Apply for the changes to take effect.
The result of these steps is shown in Figure 7-18.

Figure 7-18 Selecting physical adapter to be used as a virtual Ethernet bridge

You may assign a Host Ethernet Adapter (or Integrated Virtual Ethernet) port
to a logical partition so that the logical partition can directly access the
external network by completing the following steps:
a. From the I/O Adapter Management menu, select View/Modify Host
Ethernet Adapters.
b. Select a port with at least one available connection and click Properties.



c. Select the Connected Partitions tab.
d. Select the logical partition that you want to assign to the Host Ethernet
Adapter port and click OK. In the Performance area of the General tab you
may adjust the settings (such as speed, maximum transmission unit) for
the selected Host Ethernet Adapter port.
2. Create at least one virtual Ethernet adapter on the mobile partition:
a. From the Partition Management menu, select View/Modify Partitions.
b. Select the logical partition to which you want to assign the virtual Ethernet
adapter.
c. From the More Tasks menu, select Properties. A new window named
Partition Properties opens.
d. Select the Ethernet tab.
The result of these steps is shown in Figure 7-19.

Figure 7-19 Create virtual Ethernet adapter on the mobile partition

e. Create a virtual Ethernet adapter on the management partition:


i. In the Virtual Ethernet Adapters section, click Create Adapter.
ii. Enter the Virtual Ethernet ID and click OK to exit the Enter Virtual
Ethernet ID window.



iii. Click OK to exit the Partition Properties window.
The result of these steps is shown in Figure 7-20.

Figure 7-20 Create a virtual Ethernet adapter on the management partition

f. Create a virtual Ethernet adapter on a client partition:

Note: This step is not required for inactive migration.

i. In the Virtual Ethernet Adapters section, select a virtual Ethernet for the adapter and click OK.
ii. If no adapters are available, click Create Adapter to add a new adapter
to the list and then repeat the previous step.
3. Activate the mobile partition to establish communication between its virtual
Ethernet adapter and the management partition’s virtual Ethernet adapter.
4. Verify that the operating system of the mobile partition recognizes the new
Ethernet adapter.
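The bridge configured in the GUI steps above can also be checked from the Virtual I/O Server command line. The mkvdev and lsdev commands shown here are standard VIOS commands, but the adapter names and the sample listing are hypothetical.

```shell
# On a VIOS, the GUI's Virtual Ethernet Bridge step corresponds to creating
# a Shared Ethernet Adapter, for example (adapter names are placeholders):
#   mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1

# 'lsdev -type adapter' can confirm that the bridge exists. A hypothetical
# excerpt of its output:
lsdev_sample='ent0  Available  2-Port 10/100/1000 Base-TX PCI-X Adapter
ent2  Available  Virtual I/O Ethernet Adapter (l-lan)
ent3  Available  Shared Ethernet Adapter'

# One Shared Ethernet Adapter means one bridged virtual network:
printf '%s\n' "$lsdev_sample" | grep -c 'Shared Ethernet Adapter'   # → 1
```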



7.7.7 Validating the Partition Mobility environment
To validate the Partition Mobility environment:
1. From the Partition Management menu, select View/Modify Partitions. The
View/Modify Partitions panel opens.
2. Select the logical partition you want to migrate.
3. From the More Tasks menu, select Migrate.
4. Enter the Remote IVM, Remote user ID, and Password of that remote user.
5. Click Validate to confirm that the changed settings are acceptable for
Partition Mobility. This is shown in Figure 7-3 on page 230.
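The Validate button has a command-line counterpart on the source IVM. The sketch below assumes the migrlpar validate operation (-o v) described elsewhere in this book; the system name, IP address, user, and partition ID are placeholders.

```shell
# Hypothetical validation call, run as padmin on the source IVM:
#   migrlpar -o v -t dest_system --ip 9.3.5.180 -u padmin --id 2

# migrlpar exits non-zero when validation finds blocking errors, so a
# script can branch on its return code:
validate_rc=0   # stands in for: migrlpar -o v ...; validate_rc=$?
if [ "$validate_rc" -eq 0 ]; then
    echo "validation passed - ready to migrate"
else
    echo "validation failed - review the reported errors"
fi
# → validation passed - ready to migrate
```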

7.7.8 Migrating the mobile partition


After successfully completing all prerequisite tasks, migrate the mobile partition:
1. From the Partition Management menu, select View/Modify Partitions.
The View/Modify Partitions panel opens.
2. Select the logical partition you want to migrate.
3. From the More Tasks menu, select Migrate.
4. Enter the Remote IVM, Remote user ID, and Password of that remote user.
5. Click Migrate.



The result is shown in Figure 7-21.

Figure 7-21 Partition is migrating

If necessary, perform the following optional post-requisite tasks to complete migration of your logical partition:
1. If you performed an inactive partition migration, activate the mobile
partition on the destination server.
2. Add physical adapters to the mobile partition on the destination server.
3. If any virtual terminal connections were lost during the migration, re-establish
the connections on the destination server.
4. Assign the mobile partition to a logical partition group.
5. If mobility-unaware applications were terminated on the mobile partition prior
to its movement, then restart those applications on the destination.
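The same migration can also be driven and monitored from the CLI. The migrlpar and lslparmigr commands appear elsewhere in this book; the sample output record below is invented, but lslparmigr reports attributes in name=value form.

```shell
# Hypothetical migrate call, run as padmin on the source IVM:
#   migrlpar -o m -t dest_system --ip 9.3.5.180 -u padmin --id 2

# Progress can be polled with lslparmigr. A sample (invented) record:
rec='name=mobile1,migration_state=Migration In Progress,migration_type=active'

# Extract the migration_state attribute from the record:
printf '%s\n' "$rec" | tr ',' '\n' | awk -F= '$1 == "migration_state" { print $2 }'
# → Migration In Progress
```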



Appendix A. Error codes and logs


This appendix lists System Reference Codes (SRCs) and operating system error
logs that pertain to Live Partition Mobility.

This appendix contains the following topics:


- “SRCs, current state” on page 260
- “SRC error codes” on page 261
- “IVM source and destination systems error codes” on page 262
- “Operating system error logs” on page 266

© Copyright IBM Corp. 2007, 2009. All rights reserved. 259


SRCs, current state
Table A-1 lists the SRCs that indicate the current state of a partition migration.

Table A-1 Progress SRCs


Code      Meaning
2005      Partition is performing the drmgr command. The code is displayed on the source server while the partition is waiting to suspend, and on the destination server until the drmgr processing has completed.
C2001020  Partition is the source of an inactive migration.
C2001030  Partition is the target of an inactive migration.
C2001040  Partition is the target of an active migration.
C2001080  Partition processors are stopped.
C2001082  Partition processors are restarted.
C20010FF  Migration is complete.
D200A250  Partition has requested to suspend, as part of an active migration.
D200AFFF  Partition migration was canceled.
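During a migration, a partition's current SRC can be read from the managed system (for example with the lsrefcode command covered earlier in this book) and matched against Table A-1. The snippet below is a small illustrative sketch, not a complete decoder; the query and its filter values are placeholders.

```shell
# Hypothetical query (run on the HMC; names are placeholders):
#   lsrefcode -r lpar -m source_system --filter lpar_names=mobile1

# Matching a returned progress SRC against Table A-1:
code='C20010FF'   # example value
case "$code" in
    C2001080) echo "Partition processors are stopped." ;;
    C2001082) echo "Partition processors are restarted." ;;
    C20010FF) echo "Migration is complete." ;;
    D200AFFF) echo "Partition migration was canceled." ;;
    *)        echo "See Tables A-1 and A-2." ;;
esac
# → Migration is complete.
```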



SRC error codes
Table A-2 lists SRC codes that indicate problems with a partition migration.

Table A-2 SRC error codes


Code Meaning

B2001130 Partition migration readiness check failed.

B2001131 Resume of LpQueues failed.

B2001132 Allocated LpEvents failed.

B2001133 Failed to lock the partition configuration of the partition.

B2001134 Failed to unlock the partition configuration of the partition.

B2001140 Processing of transferred data failed.

B2001141 Processing of transferred data failed.

B2001142 Processing of transferred data failed.

B2001143 Processing of transferred data failed.

B2001144 Failed to suspend virtual I/O for the partition.

B2001145 Failed to resume virtual I/O for the partition.

B2001151 Partition attempted a dump during migration.

B2002210 Data import failure.

B2002220 Data import failure.

B2008160 PFDS build failure.



IVM source and destination systems error codes
Error codes listed in this section are from the migration commands migrlpar or
lslparmigr and can be generated on either the source system (Table A-3) or the
destination system (Table A-4 on page 264). The prefix indicates whether the
error is generated on the source or on the destination system.

Table A-3 Source system generated error codes


Code Meaning

VIOSE01042023   The maximum number of migrations are already taking place on the source managed system.
VIOSE0104202E   The migrremote command on the target managed system returned non-zero.
VIOSE01042026   The partition cannot be migrated when in its current power state.
VIOSE01042024   The partition cannot be migrated because its partition type is i5/OS.
VIOSE01042025   The partition cannot be migrated because it is a Virtual I/O Server.
VIOSE01042027   The partition cannot be migrated because it is in a workload management group.
VIOSE01042028   The partition cannot be migrated because it has physical I/O assignments.
VIOSE01042029   The partition cannot be migrated because it has HEA resource assignments.
VIOSE0104202A   A virtual slot owned by the partition has an adapter that cannot be migrated.
VIOSE01042034   The partition cannot be migrated because the partition is assigned storage that cannot be migrated.
VIOSE0104202B   RMC is not active between the management system and a Virtual I/O Server on the destination managed system.
VIOSE0104202C   The Virtual I/O Server on the source managed system is not marked as an MSP.
VIOSE0104202D   The Virtual I/O Server partition is not capable of taking part in a migration.
VIOSE01042021   The specified IP address cannot be found.
VIOSE01042036   The partition is unable to be migrated while active.
VIOSE0104202F   The partition is not the source of the migration. The executed command must be run on the source.
VIOSE01042030   The partition is not in the process of a migration.
VIOSE01042037   An MSP on the source managed system cannot communicate with any MSP on the destination managed system.
VIOSE0104204A   The partition with the given ID is not an MSP.
VIOSE01040104   A command run on the Virtual I/O Server failed.
VIOSE01042039   The migration of the partition has been stopped.
VIOSE0104203D   The partition cannot be migrated because it has a virtual Ethernet trunk adapter.
VIOSE01040F04   A warning that the partition has a physical I/O resource assigned to it that will be removed as part of the inactive migration.
VIOSE0104203F   The migrlpar process was unable to finish the migration on the source managed system because other tasks have not finished.
VIOSE01042042   Failed to lock the storage configuration on the source Virtual I/O Server.
VIOSE01042043, VIOSE01042044   Failed to start the transmission of partition data on the source MSP.
VIOSE01042047   The destination manager does not support remote partition migration.
VIOSE01042049   The migration has been stopped by the managed system.
VIOSE0104204D   The source Virtual I/O Server generated an error while processing a virtual LAN configuration.
VIOSE01040F05   The source Virtual I/O Server generated a warning while processing a virtual LAN configuration.
VIOSE01042032   RMC is not active with the migrating partition. RMC needs to be active to perform active migrations.
VIOSE0104203E   The partition cannot be migrated because it is already involved in a migration.



Table A-4 Destination system generated error codes
Code Meaning

VIOSE01090001   The migration requires a capability that the destination manager does not support.
VIOSE01090003   Unable to find a Virtual I/O Server partition with the given ID on the destination managed system.
VIOSE01090004   Unable to find a Virtual I/O Server partition with the given name on the destination managed system.
VIOSE01090005   The destination managed system does not have access to the storage assigned to the migrating partition.
VIOSE01090006   The maximum number of migrations are already taking place on the destination managed system.
VIOSE01090008   The processor compatibility mode of the migrating partition is not supported by the destination managed system.
VIOSE01090009   The memory region size on the destination managed system is not the same as on the source managed system.
VIOSE0109000A   The target Virtual I/O Server does not support partition mobility.
VIOSE0109000B   A VLAN that is bridged on the source Virtual I/O Server is not bridged on the target. This is only a warning message and does not cause the migration to fail.
VIOSE0109000C   The name of the migrating partition is already in use on the destination managed system.
VIOSE0109000E   The destination managed system already has the maximum number of partitions.
VIOSE01090010   A given partition name does not match the name of the partition with the given ID.
VIOSE01090011   The specified partition is not a mover service partition on the target managed system.
VIOSE01090012   This code appears only if the source makes a clean-up request to the target, but the partition on the target is not in the process of a migration.
VIOSE01090014   An unhandled extended error was received from firmware.
VIOSE01090015   The target managed system does not have enough available memory to create the partition.
VIOSE01090016   The target managed system does not have enough available processing units to create the partition.
VIOSE01090017   The target managed system does not have enough available processors to create the partition.
VIOSE01090018   The availability priority of the mobile partition is higher than that of the target management partition.
VIOSE01090019   The maximum processors value exceeds the largest supported processor value on the target managed system.
VIOSE0109001B   A command called locally on the Virtual I/O Server failed.
VIOSE0109001C   The partition with the given ID on the target managed system is not a Virtual I/O Server.
VIOSE0109001D   A Virtual I/O Server partition with the given name does not exist on the target managed system.
VIOSE0109002E   The RMC connection to a partition (either Virtual I/O Server or MSP) is not active.
VIOSE01090030   The destination managed system was not found.
VIOSE01090032   Unable to find the specified IP address on the target MSP.
VIOSE01090033   Not enough memory is available for firmware to use with the new partition.
VIOSE01090034   The processor pool ID specified was not found on the target managed system.
VIOSE01090035   The processor pool name specified was not found on the target managed system.
VIOSE01090036   The command to set the storage configuration for the partition failed.
VIOSE01090037   The command to lock the storage configuration for the partition failed.
VIOSE01090038   The command to start data transmission on the destination MSP failed.
VIOSE01090039   The destination managed system was not able to clean up the migration because not all partitions involved have finished.
VIOSE0109003A   The destination managed system is not capable of taking part in a migration.
VIOSE0109003B   The partition with the specified name is not an MSP.
VIOSE0109003C   The destination Virtual I/O Server generated a warning while processing a virtual LAN configuration. This is only a warning and will not cause the migration to fail.
VIOSE0109003D   The destination Virtual I/O Server generated an error while processing a virtual LAN configuration.
VIOSE0109003E   The partition cannot be migrated because the target Virtual I/O Server has already reached its maximum number of virtual slots.

Operating system error logs


Table A-5 lists entries that can appear in the operating system error logs of the
partitions involved in a partition migration. The first column lists the label of the
entry, and the second column lists the partition that logs the entry.

Table A-5 Operating system error log entries


Error log entry labels   Location (partition)
CLIENT_FAILURE           Virtual I/O Server providing VSCSI services to mobile partition
MVR_FORCE_SUSPEND        Mover service partition Virtual I/O Server
MVR_MIG_COMPLETED        Mover service partition Virtual I/O Server
MVR_MIG_ABORTED          Mover service partition Virtual I/O Server
CLIENT_PMIG_STARTED      AIX 5L mobile partition
CLIENT_PMIG_DONE         AIX 5L mobile partition
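On an AIX mobile partition, these labels can be filtered out of the error report with errpt; on a Virtual I/O Server, the errlog command (used elsewhere in this book) serves the same purpose. The saved report below is invented for illustration; the identifiers and resource names are hypothetical.

```shell
# On the mobile partition, filter the AIX error report by label:
#   errpt -J CLIENT_PMIG_STARTED,CLIENT_PMIG_DONE

# The same check against a saved (hypothetical) report after a migration:
report='IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
08917DC6   0315120709 I S pmig           Client Partition Migration Started
A5E6DB96   0315121209 I S pmig           Client Partition Migration Completed'

# Two matching entries: the migration started and completed:
printf '%s\n' "$report" | grep -c 'Partition Migration'   # → 2
```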



Abbreviations and acronyms

ABI  application binary interface
ACL  access control list
AFPA  Adaptive Fast Path Architecture
AIO  asynchronous I/O
AIX  Advanced Interactive Executive
APAR  authorized program analysis report
API  application programming interface
ARP  Address Resolution Protocol
ASMI  Advanced System Management Interface
BFF  Backup File Format
BIND  Berkeley Internet Name Domain
BIST  Built-in Self-Test
BLV  Boot Logical Volume
BOOTP  boot protocol
BOS  base operating system
BSD  Berkeley Software Distribution
BSR  barrier-synchronization register
CA  certificate authority
CATE  Certified Advanced Technical Expert
CD  compact disc
CD-R  CD Recordable
CD-ROM  compact disc-read only memory
CDE  Common Desktop Environment
CEC  central electronics complex
CHRP  Common Hardware Reference Platform
CLI  command-line interface
CLVM  Concurrent LVM
CPU  central processing unit
CRC  cyclic redundancy check
CSM  Cluster System Management
CSV  comma-separated values
CUoD  Capacity Upgrade on Demand
DCM  Dual Chip Module
DES  Data Encryption Standard
DGD  dead gateway detection
DHCP  Dynamic Host Configuration Protocol
DLPAR  dynamic LPAR
DMA  direct memory access
DNS  Domain Name System
DR  dynamic reconfiguration
DRM  dynamic reconfiguration manager
DVD  digital versatile disc
EC  EtherChannel
ECC  Error Checking and Correcting
EOF  end of file
EPOW  Environmental and Power Warning
ERRM  Event Response resource manager
ESS  Enterprise Storage Server®
F/C  feature code
FC  Fibre Channel
FCAL  Fibre Channel Arbitrated Loop
FDX  full duplex
FLOP  floating point operation
FRU  field replaceable unit
FTP  file transfer protocol
GDPS®  Geographically Dispersed Parallel Sysplex™
GID  Group ID
GPFS™  General Parallel File System
GUI  graphical user interface
HACMP  High-Availability Cluster Multi-Processing
HBA  Host Bus Adapter
HEA  Host Ethernet Adapter
HMC  Hardware Management Console
HPT  hardware page table
HTML  Hypertext Markup Language
HTTP  Hypertext Transfer Protocol
Hz  Hertz
I/O  input/output
IBM  International Business Machines Corporation
ID  Identification
IDE  Integrated Device Electronics
IEEE  Institute of Electrical and Electronics Engineers
IP  Internetwork Protocol
IPAT  IP Address Takeover
IPL  initial program load
IPMP  IP Multipathing
ISV  independent software vendor
ITSO  International Technical Support Organization
IVM  Integrated Virtualization Manager
JFS  journaled file system
JIT  just in time
L1  Level 1
L2  Level 2
L3  Level 3
LA  Link Aggregation
LACP  Link Aggregation Control Protocol
LAN  local area network
LDAP  Lightweight Directory Access Protocol
LED  light emitting diode
LHEA  Logical Host Ethernet Adapter
LMB  logical memory block
LPAR  logical partition
LPP  Licensed Program Product
LUN  logical unit number
LV  logical volume
LVCB  logical volume control block
LVM  Logical Volume Manager
MAC  Media Access Control
MBps  megabytes per second
Mbps  megabits per second
MCM  Multi-Chip Module
ML  maintenance level
MP  multiprocessor
MPIO  multipath I/O
MSP  mover service partition
MTU  maximum transmission unit
NFS  network file system
NIB  Network Interface Backup
NIM  Network Installation Management
NIMOL  NIM on Linux
NTP  Network Time Protocol
NVRAM  non-volatile random access memory
ODM  Object Data Manager
OSPF  Open Shortest Path First
PCI  Peripheral Component Interconnect
PIC  Pool Idle Count
PID  process ID
PKI  public key infrastructure
PLM  Partition Load Manager
PMAPI  Performance Monitor API
PMP  Project Management Professional
POST  power-on self-test
POWER  Performance Optimization with Enhanced RISC (Architecture)
PTF  program temporary fix
PTX  Performance Toolbox
PURR  Processor Utilization Resource Register
PV  physical volume
PVID  physical volume identifier
PVID  Port Virtual LAN Identifier
QoS  Quality of Service
RAID  Redundant Array of Independent Disks
RAM  random access memory
RAS  reliability, availability, and serviceability
RCP  remote copy
RDAC  redundant disk array controller
RIO  remote I/O
RIP  Routing Information Protocol
RISC  reduced instruction set computer
RMC  Resource Monitoring and Control
RPC  remote procedure call
RPL  remote program loader
RPM  Red Hat Package Manager
RSA  Rivest-Shamir-Adleman algorithm
RSCT  Reliable Scalable Cluster Technology
RSH  remote shell
SAN  storage area network
SCSI  Small Computer System Interface
SDD  Subsystem Device Driver
SEA  Shared Ethernet Adapter
SIMD  single-instruction, multiple-data
SMIT  System Management Interface Tool
SMP  symmetric multiprocessor
SMS  System Management Services
SMT  simultaneous multithreading
SP  service processor
SPOT  shared product object tree
SRC  System Resource Controller
SRN  service request number
SSA  Serial Storage Architecture
SSH  Secure Shell
SSL  Secure Sockets Layer
SUID  set user ID
SVC  SAN Virtualization Controller
TCP/IP  Transmission Control Protocol/Internet Protocol
TSA  Tivoli® System Automation
UDF  Universal Disk Format
UDID  Universal Disk Identification
VASI  virtual asynchronous services interface
VG  volume group
VGDA  Volume Group Descriptor Area
VGSA  Volume Group Status Area
VIPA  virtual IP address
VLAN  virtual local area network
VP  virtual processor
VPD  vital product data
VPN  virtual private network
VRRP  Virtual Router Redundancy Protocol
VSD  virtual shared disk
WLM  workload manager



Related publications

The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.

IBM Redbooks
For information about ordering these publications, see “How to get IBM
Redbooks” on page 274. Note that several documents referenced here might be
available in softcopy only.
- AIX 5L Differences Guide Version 5.3 Edition, SG24-7463
- AIX 5L Practical Performance Tools and Tuning Guide, SG24-6478
- Effective System Management Using the IBM Hardware Management Console for pSeries, SG24-7038
- IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194
- Implementing High Availability Cluster Multi-Processing (HACMP) Cookbook, SG24-6769
- Introduction to pSeries Provisioning, SG24-6389
- Linux Applications on pSeries, SG24-6033
- Managing AIX Server Farms, SG24-6606
- NIM from A to Z in AIX 5L, SG24-7296
- Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
- A Practical Guide for Resource Monitoring and Control (RMC), SG24-6615
- Integrated Virtualization Manager on IBM System p5, REDP-4061
- PowerVM Virtualization on IBM System p: Managing and Monitoring, SG24-7590
- PowerVM Virtualization on IBM System p: Introduction and Configuration Fourth Edition, SG24-7940
- IBM BladeCenter JS12 and JS22 Implementation Guide, SG24-7655
- Integrated Virtual Ethernet Adapter Technical Overview and Introduction, REDP-4340

Other publications
These publications are also relevant as further information sources:
- Documentation available on the support and services Web site includes:
  - User guides
  - System management guides
  - Application programmer guides
  - All commands reference volumes
  - Files reference
  - Technical reference volumes used by application programmers
  The support and services Web site is:
  http://www.ibm.com/systems/p/support/index.html
- Virtual I/O Server and support for Power Systems (including Advanced PowerVM feature):
  https://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
- Linux for pSeries installation and administration (SLES 9):
  http://www.ibm.com/developerworks/systems/library/es-pinstall/
- Linux virtualization on POWER5: A hands-on setup guide:
  http://www.ibm.com/developerworks/edu/dw-esdd-virtual-i.html
- POWER5 Virtualization: How to set up the IBM Virtual I/O Server:
  http://www.ibm.com/developerworks/aix/library/au-aix-vioserver-v2/
- Latest Multipath Subsystem Device Driver User's Guide:
  http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303



Online resources
These Web sites are also relevant as further information sources:
- AIX and Linux on Power Systems Community
  http://www.ibm.com/systems/power/community/
- Capacity on Demand
  http://www.ibm.com/systems/p/advantages/cod/
- IBM PowerVM
  http://www.ibm.com/systems/power/software/virtualization/
- IBM System p and AIX Information Center
  http://publib16.boulder.ibm.com/pseries/index.htm
- IBM System Planning Tool
  http://www.ibm.com/systems/support/tools/systemplanningtool/
- IBM Systems Hardware Information Center
  http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp
- IBM Systems Workload Estimator
  http://www.ibm.com/systems/support/tools/estimator/index.html
- Novell SUSE Linux Enterprise Server information
  http://www.novell.com/products/server/index.html
- SCSI Technical Committee T10
  http://www.t10.org
- SDDPCM software download page
  http://www.ibm.com/support/docview.wss?uid=ssg1S4000201
- SDD software download page
  http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000065&loc=en_US&cs=utf-8&lang=en
- Service and productivity tools for Linux on POWER
  http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
- Virtual I/O Server support for Power Systems home
  http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html
- Virtual I/O Server supported hardware
  http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
- Virtual I/O Server downloads
  http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html

How to get IBM Redbooks


You can search for, view, or download Redbooks, IBM Redpapers™, Technotes,
draft publications and Additional materials, as well as order hardcopy Redbooks,
at this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services



Index
AIX 5L 51
A AIX 6
accounting, AIX 43
Live Workload Partitions 16
active migration 5, 25–26, 31
alternate error log 34
capability 23, 34
applications
compatibility 23, 34
check-migrate 35
completion 39
migration capability 34
concurrent 35
prepare for migration 38
configuration checks 34
reconfiguration notification 39
definition 5
ARP 39
differences 14
ASMI 55
dirty page 38
attribute
entitlements 25
mover service partition 21, 93
example 9
reserve_policy 92
HMC 13
time reference 22, 94
memory modification 38
VASI 93
migratability 25
availability 15
migration phase 36
checks 35
mover service partition 13
requirements 4
MSP selection 41
multiple concurrent migrations 128
preparation 32 B
prerequisites 25 barrier-synchronization register
processor compatibility mode 205–206 See BSR
reactivation 39 basic environments 90–91
recovery 218 battery power 24, 55
remote 130 bootlist command 156
requirements 93, 129 bosboot command 156
shared Ethernet adapter 8 BSR 23, 25, 34, 72
state 31
stopping 41
time 129
C
capability 28
validation 216, 218 active migration 34
VIOS selection 40 operating system 34
workflow 6, 13, 34 Capacity on Demand 57, 60, 238
active profile 28 cfgmgr command 154, 157, 201
adapters changes
dedicated 25, 32, 34 non-reversible 31
physical 25, 29, 34 reversible 31
virtual 29 rollback 31, 42
advanced accounting 43 chdev command 80–81, 246–247
Advanced POWER Virtualization 4 check-migrate request 35
AIX 28, 35, 43 chvg command 155
kernel extensions 39, 43 CLI 41, 162, 177



command line interface odmget 80
See CLI oem_setup_env 80
commands compatibility 28
AIX active migration 34
bootlist 156 completion
bosboot 156 active migration 39
cfgmgr 154, 157, 201 configuration
chvg 155 memory 37
errpt 215, 220 processor 37
extendvg 155 virtual adapter 37
filemon 43 configuration checks 34
lsdev 154, 157, 201
migratepv 156
reducevg 156
D
dedicated adapters 25, 32
ssh-keygen 163
dedicated I/O 93
topas 43
dedicated resources 149
tprof 43
demand paging 38
clients
dirty memory pages 38, 42
lsdev 88
disks, internal 34
mktcpip 88
distance, maximum 88
HMC
dynamic reconfiguration event
lslic 49
check-migrate 35
lslparmigr 41, 121, 128, 135, 145, 166, 172,
post 39
175, 214
post migration 35
lsrefcode 214
prepare for migration 37
lsrsrc 67
lssyscfg 175
migrlpar 129, 145, 175, 218 E
mkauthkeys 138, 163, 173 environments
ssh 163 basic 90
IVM errlog command 215, 220
chdev 246–247 error logging partition 28
ioslevel 224 error logs 266
lsdev 245–246 error messages 101
lslparmigr 253 errpt command 215, 220
lspv 246–247 EtherChannel 87
lssyscfg 240 exclusive-use processor resource set (XRSET) 43
lsvet 238 extendvg command 155
odmget 246
VIOS
F
chdev 80–81, 151 filemon command 43
errlog 215, 220 firmware 48
ioslevel 51, 64 supported migration matrix 50
lsattr 81
lsdev 79, 191
lslparmigr 172 G
lsmap 191 gratuitous ARP 39
lspv 81
mkvdev 151



H multiple concurrent migrations 128
HACMP 16 partition profile 27, 30
Hardware Management Console processor compatibility mode 205, 208
See HMC remote 130
hardware page table 32 rollback 31
HEA 32 shared Ethernet adapter 8
IVE 22 stopping 31
heart-beat 38 validation 28, 216, 218
help 274 workflow 5, 12, 30
High Availability Cluster Multiprocessing 16 infrastructure flexibility 3
HMC 20 Integrated Virtual Ethernet 78
configuration 8, 11 See IVE
dual configuration 130 Integrated Virtualization Manager 221
local 131 activation of edition key 238
locking mechanism 130 firmware 222
migration progress window 215 how active migration works 225
preparation 61 how inactive migration works 226
recovery actions 217 migrating 257
redundant 23 network 253
reference code 214 operating system requirements 224
refresh destination system 139 partition workload group 242
remote 131 physical adapters 243
requirements 47 preparation 232
RMC connection 25 processor compatibility mode 240
roles requirements 222
hmcsuperadmin 56, 138 reserve policy 245
upgrade 62 updates 223
hmcsuperadmin role 56, 138 validating 253, 257
HPT 32 validation for active migration 226–227, 231
hscroot user role 56 virtual Fibre Channel 248
huge pages 25, 34, 74 internal disks 34
hypervisor 21, 29, 34, 38, 42 invalid state 42
processor compatibility mode 205 ioslevel command 51, 64, 224
iSCSI 24
IVE 22
I HEA 22
IEEE volume attribute 80
LHEA 25
inactive migration 5, 25, 27
active profile 28
capability 23, 28 K
compatibility 23, 28 kernel extensions 39, 43
completion phase 31 check-migrate 35
dedicated I/O adapters 92 prepare for migration 38
definition 5
example 9, 14
L
HMC 12 large pages, AIX 43
huge pages 25 LHEA 25, 78
migratability 25 Link Aggregation 87
migration phase 29 Linux 28, 44, 51

Live Application Mobility 16 migratepv command 156
Live Partition Mobility migration
high availability 15 active 31
PowerVM support 16 inactive 27
preparation 53 messages 103
remote 130 errors 101
Live Workload Partitions 16 warnings 101
LMB 34, 54 mover service partition selection 111
logical HEA processor compatibility mode 205
See LHEA profile 27
logical memory block remote 130
See LMB shared processor pool selection 114
logical unit number 31 specifying the destination profile 106
logical volumes 24, 29 starting state 41
LPAR workload group 25, 28, 32 state 31
lsattr command 81 status window 116
lsdev command 79, 88, 154, 157, 191, 201, steps 99
245–246 validation 110
lslic command 49 virtual Fibre Channel 193
lslparmigr command 41, 121, 128, 135, 145, 166, virtual SCSI adapter assignment 113
172, 175, 214, 253 VLAN 112
remote capability 168 workflow 26
lsmap command 191 migration phase
lspv command 81, 246–247 active migration 36
lsrefcode command 214 migrlpar command 129, 145, 175, 218
lsrsrc command 67 example 165
lssyscfg command 175, 240 migrate 165
lsvet command 238 recovery 165
LUN stop 165
mapping 31, 39 validate 165
minimal requirements 91
HMC 91
M LMB 91
MAC address 28
Network connection 91
uniqueness 35
partition 91
memory
storage 92
affinity 43
VIOS 91
available 56
virtual SCSI 91
configuration 37
mkauthkeys command 138, 163, 173
dirty page 38
mktcpip command 88
footprint 38
mkvdev command 151
LPAR memory size 42
mobility-aware 79
modification 38
mobility-safe 79
pages 38
mover service partition 24
messages 101, 110
See MSP
migratability 25
MPIO 30, 35
huge pages 25
MSP 21, 25, 32–33, 36–39, 64
redundant error path 25
configuration 96, 129, 143
versus partition readiness 25
definition 12



MSP (continued)
   error log 215
   information 168
   lslparmigr command 168, 171
   network 42
   performance 42
   selection 41
   virtual Fibre Channel 195

N
network
   performance 42
   preparation 87
   requirements 8, 12, 52, 131
   state transfer 42
network time protocol
   See NTP
new system deployment 4
non-volatile RAM 32
NPIV 7, 31, 39, 187
   benefits 188
   port enablement 190
   switch 189
NTP 22, 32
NVRAM 27, 31–32

O
odmget command 80, 246
oem_setup_env command 80
operating system
   migration capability 34
   requirements 51
   version 66

P
pages
   demand paging 38
   transmission 38
partition
   alternate error log 34
   configuration 32
   error log 215, 220
   error logging 28
   functional state 39
   information 170
   lslparmigr command 170
   memory 32
   memory size 42
   migration capability 34
   migration from single to dual VIOS 126
   migration recovery 217
   minimal requirements 91
   mirroring on two VIOS 121
   multipath on two VIOS
      MPIO 124
   name 25, 28, 35
   preparation 66
   profile 21, 26, 30, 79
   quiescing 38
   readiness versus migratability 25
   recovery 220
   redundant error path 25
   requirements 7
   resumption 38
   service 28
   shell 30, 37
   state 26, 28, 31–32, 35
   state transfer 38
   type 34
   validation 99, 109, 143
   visibility 39
   workload group 25, 32
partition workload groups 70
performance 42
performance monitor API 22
performance monitoring 43
physical adapters 25, 29, 34
   requirements 7
physical I/O 76
physical identifier 80
physical resources 149
pinned memory 43
PMAPI 22, 38
post migration reconfiguration event 35
POWER Hypervisor 21, 34, 38, 42
powered off state 42
PowerVM 16
   requirements 16
   Workload Partitions Manager 16
PowerVM Enterprise Edition 47
   enter activation code 48
   view history log 47
prepare for migration event 37
prerequisites 23
processor compatibility mode 205
   active migration 206
   change 206

processor compatibility mode (continued)
   current 205, 209
   default 208
   enhanced 205
   examples 206
   inactive migration 208
   non-enhanced 206
   preferred 205, 208
   supported 206
   verification 208
processors
   available 58
   binding 43
   configuration 37
   state 32
profile 21, 26
   active 28
   last activated 27, 30
   name 26, 78
   pending values 37
   workload group 25

R
RAS tools 44
reactivation
   active migration 39
readiness 24
   battery power 24
   infrastructure 25
   server 24
Red Hat Enterprise Linux 51
Redbooks Web site 274
   Contact us xix
reducevg command 156
redundant error path reporting 68
remote migration 130–131
   considerations 135
   information 169
   infrastructure 133
   lslparmigr command 169
   migration 141
   network test 136
   private network 132
   requirements 132
   workflow 131
required I/O 76
requirements
   active migration 93
   adapters 25
   battery power 24
   capability 23
   compatibility 23
   example 9
   hardware 7
   huge pages 25
   memory 25
   name 25
   network 8, 12, 129
   partition 7
   physical adapter 7
   physical adapters 93
   processors 25
   redundant error path 25
   RMC 93
   storage 8, 92
   synchronization 32
   VASI 93
   VIOS 7–8
   virtual SCSI 91
reserve_policy attributes 79, 92
resource availability 35
resource balancing 4
Resource Monitoring and Control
   See RMC
resource sets, AIX 43
resource state 32
RMC 20, 24–25, 28, 34, 66, 131
rollback 31, 42

S
SAN 24–25, 32, 131
SCSI reservation 24
SEA 8, 32, 87
server readiness 24
service partition 28
shared Ethernet adapter
   See SEA
shared processor pool 147, 171
   CLI 148
   information 168
   lslparmigr command 168
SIMD 23
SMS 27
SSH
   key authentication 132
   key generation 136
ssh command 163



ssh-keygen command 163
state
   active partition 32
   changes 35
   invalid 42
   migration starting 41
   of resource 32
   powered off 42
   processor 32
   transfer 29
   transmission 38
   virtual adapter 32
state transfer
   network 42
stopping
   active migration 41
   inactive migration 31
storage
   preparation 79
   requirements 8, 51
storage area network
   See SAN
storage pool 24
SUSE Linux Enterprise Server 51
suspend window 39
synchronization 32
system
   preparation 54
   reference codes 260
   requirements 47
   trace 43

T
throttling workload 38
time
   synchronization 32
time of day 32
time reference configuration 98
time reference partition (TRP) 22
time-of-day clocks 22
   synchronization 65
topas command 43
tprof command 43
TRP 22

U
unique identifier 80
uniqueness
   MAC address 28, 35
   partition name 25, 35
upgrade licensed internal code 49

V
validation 121
   inactive migration 28
   remote migration 138
   workflow 28
VASI 21, 37, 39
VIOS
   configuration 129
   dual 121
   error log 215
   minimal requirements 91
   multiple 120
   preparation 63
   requirements 7–8, 50
   See also Virtual I/O Server
   shared Ethernet failover 120
   single to dual 126
   VASI 93
   version 64
virtual adapter 29
   configuration 37
   migration map 30, 35, 38, 40
   slot numbering 120
   state 32
virtual device mapping 33
virtual Ethernet 24
virtual Fibre Channel 24, 30, 35, 38–40, 120, 131, 187
   basic configuration 187
   benefits 188
   migration 193
   multipathing 196
   preparation 190
   requirements 189
   worldwide port name (WWPN) 190
Virtual I/O Server 25, 29, 32, 34
   information 168
   lslparmigr command 168
   See also VIOS 21
   selection for active migration 40
virtual optical devices 32
virtual SCSI 24, 30, 34–35, 38–39, 120, 131
   mappings 25, 165
   reserve_policy 92

virtual serial I/O 69
   default adapters 26
VLAN 25, 32

W
warning messages 101
warnings 103
workflow 26
   active migration 34
   inactive migration 30
   validation 28
workload
   throttling 38
workload group 25, 28, 32
workload manager 43
workload partition
   See WPAR
WPAR
   migration 16
   requirements 17
WWPN 190

X
XRSET 43



Back cover

IBM PowerVM
Live Partition Mobility

Explore the PowerVM Enterprise Edition Live Partition Mobility

Move active and inactive partitions between servers

Manage partition migration with an HMC or IVM

Live Partition Mobility is the next step in the IBM Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

This IBM Redbooks publication discusses how Live Partition Mobility can help technical professionals, enterprise architects, and system administrators:

- Migrate entire running AIX and Linux partitions and hosted applications from one physical server to another without disrupting services and loads.
- Meet stringent service-level agreements.
- Rebalance loads across systems quickly, with support for multiple concurrent migrations.
- Use a migration wizard for single partition migrations.

This book can help you understand, plan, prepare, and perform partition migration on IBM Power Systems servers that are running AIX.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-7460-01 ISBN 0738432423
