Implementing the IBM System Storage SAN Volume Controller V7.4
Jon Tate
Frank Enders
Torben Jensen
Hartmut Lonzer
Libor Miklas
Marcin Tabinowski
Redbooks
International Technical Support Organization
April 2015
SG24-7933-03
Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.
This edition applies to IBM SAN Volume Controller software Version 7.4 (includes pre-GA code in some areas)
and the IBM SAN Volume Controller 2145-DH8.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
5.4.11 Running SAN Volume Controller commands from an AIX host system . . . . . . 191
5.5 Microsoft Windows information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.5.1 Configuring Windows Server 2008 and 2012 hosts . . . . . . . . . . . . . . . . . . . . . . 192
5.5.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.5.3 Hardware lists, device driver, HBAs, and firmware levels. . . . . . . . . . . . . . . . . . 193
5.5.4 Installing and configuring the host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.5.5 Changing the disk timeout on Windows Server . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.5.6 Installing the SDDDSM multipath driver on Windows . . . . . . . . . . . . . . . . . . . . . 194
5.5.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to Windows Server 2012 . . . . . . . . . . 197
5.5.8 Extending a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.5.9 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.6 Using SAN Volume Controller CLI from a Windows host . . . . . . . . . . . . . . . . . . . . . . 211
5.7 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.7.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.7.2 System requirements for the IBM System Storage hardware provider . . . . . . . . 213
5.7.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . . 213
5.7.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.7.5 Creating free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.7.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.8 Specific Linux (on x86/x86_64) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.8.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.9 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.9.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.9.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 227
5.9.3 HBAs for hosts that are running VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9.4 VMware storage and zoning guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9.5 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.9.6 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.9.7 Attaching VMware to volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.9.8 Volume naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.9.9 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . . . 232
5.9.10 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.9.11 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.10 Sun Solaris hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.10.1 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.11 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.11.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 237
5.11.2 Supported multipath solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.11.3 Coexistence of SDD and PVLinks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.11.4 Using an IBM SAN Volume Controller volume as a cluster lock disk . . . . . . . . 238
5.11.5 Support for HP-UX with more than eight LUNs. . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12 Using the SDDDSM, SDDPCM, and SDD web interface . . . . . . . . . . . . . . . . . . . . . 238
5.13 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.13.1 SAN Volume Controller storage subsystem attachment guidelines . . . . . . . . . 240
Chapter 7. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 361
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.2 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.2.1 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
7.2.2 SSD arrays and flash MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
7.2.3 Disk tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.2.4 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.2.5 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.2.6 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.2.7 Modifying the Easy Tier setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
7.2.8 Monitoring tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
7.2.9 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.3.1 Configuring a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
7.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.4 Real-time Compression Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
7.4.1 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.4.2 Real-time Compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.4.3 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.4.4 Random Access Compression Engine in the SVC software stack . . . . . . . . . . . 394
7.4.5 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.4.6 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.4.7 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.4.8 Configuring compressed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7.4.9 SVC 2145-DH8 node software and hardware updates related to Real-time Compression . . . . . . . . . . 400
7.4.10 Software enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7.4.11 Hardware updates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
7.4.12 Dual RACE component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
8.4.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
8.4.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
8.4.12 Thin-provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8.4.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8.4.14 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.4.15 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.4.16 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
8.4.17 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
8.4.18 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . 432
8.4.19 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.6.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.6.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.6.3 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
8.6.4 IP partnership and SVC terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
8.6.5 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.6.6 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
8.6.7 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
8.6.8 Setting up the SVC system IP partnership by using the GUI . . . . . . . . . . . . . . . 454
8.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
8.7.1 Multiple SVC system mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
8.7.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
8.7.3 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
8.7.4 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
8.7.5 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
8.7.6 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.8 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.9 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.7.10 SVC Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.7.11 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
8.7.12 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
8.7.13 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
8.7.14 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.15 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.16 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.17 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
8.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . . 473
8.7.19 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.7.20 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.8 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.8.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.8.2 Listing available SVC system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.8.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.8.4 SVC system partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
8.8.5 Creating a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . . 486
8.8.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 486
8.8.7 Changing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 486
8.8.8 Changing a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 487
8.8.9 Starting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 487
8.8.10 Stopping a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 487
8.8.11 Starting a Metro Mirror/Global Mirror Consistency Group. . . . . . . . . . . . . . . . . 488
8.8.12 Stopping a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 488
8.8.13 Deleting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 488
8.8.14 Deleting a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 488
8.8.15 Reversing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 489
8.8.16 Reversing a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . 489
8.9 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.9.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.9.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
Chapter 9. SAN Volume Controller operations using the command-line interface. . 493
9.1 Normal operations by using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
9.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
9.2 New commands and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
9.3 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . . 500
9.3.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
9.3.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.3.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.3.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.3.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
9.3.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.3.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.3.8 Adding MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.3.9 Showing MDisks in a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.3.10 Working with a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.3.11 Creating a storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.3.12 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
9.3.13 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.3.14 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.3.15 Removing MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4.1 Creating an FC-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
9.4.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
9.5 Working with the Ethernet port for iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
9.6 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
9.6.1 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
9.6.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
9.6.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
9.6.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
9.6.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
9.6.6 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
9.6.7 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.6.8 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.6.9 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.6.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.6.11 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
9.6.12 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.6.13 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.6.14 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.6.15 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . . 537
9.13.14 Migrating a volume to a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . 579
9.13.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
9.13.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
9.14 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
9.14.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
9.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and ITSO_SVC4 . . . . . . . . . . 587
9.14.3 Creating a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
9.14.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 592
9.14.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
9.14.7 Starting a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
9.14.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
9.14.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
9.14.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 595
9.14.11 Stopping a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . 596
9.14.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 597
9.14.13 Restarting a Metro Mirror Consistency Group in the Idling state . . . . . . . . . . 598
9.14.14 Changing the copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . 598
9.14.15 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . 599
9.14.16 Switching the copy direction for a Metro Mirror Consistency Group . . . . . . . . 600
9.14.17 Creating a SAN Volume Controller partnership among clustered systems. . . 601
9.14.18 Star configuration partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
9.15 Global Mirror operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
9.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
9.15.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and ITSO_SVC4 . . . . . . . . . . 608
9.15.3 Changing link tolerance and system delay simulation . . . . . . . . . . . . . . . . . . . 610
9.15.4 Creating a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 611
9.15.5 Creating Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
9.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . . 613
9.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
9.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 614
9.15.9 Starting a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
9.15.10 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
9.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
9.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 617
9.15.13 Stopping a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 617
9.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 618
9.15.15 Restarting a Global Mirror Consistency Group in the Idling state . . . . . . . . . . 619
9.15.16 Changing the direction for Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
9.15.17 Switching the copy direction for a Global Mirror relationship . . . . . . . . . . . . . 620
9.15.18 Switching the copy direction for a Global Mirror Consistency Group . . . . . . . 621
9.15.19 Changing a Global Mirror relationship to the cycling mode. . . . . . . . . . . . . . . 622
9.15.20 Creating the thin-provisioned Change Volumes . . . . . . . . . . . . . . . . . . . . . . . 624
9.15.21 Stopping the stand-alone remote copy relationship . . . . . . . . . . . . . . . . . . . . 624
9.15.22 Setting the cycling mode on the stand-alone remote copy relationship . . . . . 625
9.15.23 Setting the Change Volume on the master volume. . . . . . . . . . . . . . . . . . . . . 625
9.15.24 Setting the Change Volume on the auxiliary volume . . . . . . . . . . . . . . . . . . . 626
9.15.25 Starting the stand-alone relationship in the cycling mode. . . . . . . . . . . . . . . . 626
9.15.26 Stopping the Consistency Group to change the cycling mode . . . . . . . . . . . . 627
9.15.27 Setting the cycling mode on the Consistency Group . . . . . . . . . . . . . . . . . . . 628
9.15.28 Setting the Change Volume on the master volume relationships of the Consistency Group . . . . . . . . . . 628
9.15.29 Setting the Change Volumes on the auxiliary volumes. . . . . . . . . . . . . . . . . . 630
9.15.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode . . . . . . 631
9.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
9.16.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
9.16.2 Running the maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
9.16.3 Setting up SNMP notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
9.16.4 Setting the syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
9.16.5 Configuring error notification by using an email server . . . . . . . . . . . . . . . . . . . 641
9.16.6 Analyzing the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
9.16.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
9.16.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
9.17 Backing up the SAN Volume Controller system configuration . . . . . . . . . . . . . . . . . 647
9.17.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
9.18 Restoring the SAN Volume Controller clustered system configuration . . . . . . . . . . . 649
9.18.1 Deleting the configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
9.19 Working with the SAN Volume Controller quorum MDisks . . . . . . . . . . . . . . . . . . . . 650
9.19.1 Listing the SAN Volume Controller quorum MDisks . . . . . . . . . . . . . . . . . . . . . 650
9.19.2 Changing the SAN Volume Controller quorum MDisks. . . . . . . . . . . . . . . . . . . 650
9.20 Working with the Service Assistant menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
9.20.1 SAN Volume Controller CLI Service Assistant menu . . . . . . . . . . . . . . . . . . . . 651
9.21 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
9.22 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
Chapter 10. SAN Volume Controller operations using the GUI. . . . . . . . . . . . . . . . . . 655
10.1 Normal SVC operations using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
10.1.1 Introduction to the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
10.1.2 Content view organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
10.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
10.2 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
10.2.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.2.2 System details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
10.2.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
10.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
10.3 Working with external disk controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.1 Viewing the disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.3 Site awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
10.3.4 Discovering MDisks from the external panel. . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4.1 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4.2 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
10.4.3 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
10.4.4 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
10.5 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
10.5.1 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
10.5.2 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
10.5.3 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
10.5.4 Assigning MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
10.5.5 Unassigning MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
10.6 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.7 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.7.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
10.7.2 Adding a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
10.7.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
10.7.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
10.7.5 Creating or modifying a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
10.7.6 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
10.7.7 Deleting all host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
10.8 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
10.8.1 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
10.8.2 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
10.8.3 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
10.8.4 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
10.8.5 Modifying thin-provisioned or compressed volume properties . . . . . . . . . . . . . 710
10.8.6 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
10.8.7 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
10.8.8 Deleting all host mappings for a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
10.8.9 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
10.8.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
10.8.11 Shrinking the real capacity of a thin-provisioned or compressed volume . . . . 721
10.8.12 Expanding the real capacity of a thin-provisioned or compressed volume . . . 723
10.8.13 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
10.8.14 Adding a mirrored copy to an existing volume . . . . . . . . . . . . . . . . . . . . . . . . 727
10.8.15 Deleting a mirrored copy from a volume mirror. . . . . . . . . . . . . . . . . . . . . . . . 729
10.8.16 Splitting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
10.8.17 Validating volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
10.8.18 Migrating to a thin-provisioned volume by using volume mirroring . . . . . . . . . 732
10.8.19 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.8.20 Migrating a volume to an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . 735
10.8.21 Creating an image mode mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.9 Copy Services and managing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.9.1 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
10.9.2 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
10.9.3 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
10.9.4 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
10.9.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
10.9.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 751
10.9.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
10.9.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 756
10.9.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 757
10.9.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
10.9.11 Renaming a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
10.9.12 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
10.9.13 Deleting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
10.9.14 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
10.9.15 Starting the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
10.9.16 Stopping the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
10.10 Copy Services: Managing remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
10.10.1 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
10.10.2 Creating a Fibre Channel partnership between two remote SVC systems . . . 767
10.10.3 Creating an IP partnership between remote SVC systems. . . . . . . . . . . . . . . 770
10.10.4 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . 772
10.10.5 Creating a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
10.10.6 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
10.10.7 Renaming a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
10.10.8 Moving a stand-alone remote copy relationship to a Consistency Group . . . . 784
10.10.9 Removing a remote copy relationship from a Consistency Group . . . . . . . . . 785
10.10.10 Starting a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.10.11 Starting a remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 787
10.10.12 Switching the copy direction for a remote copy relationship . . . . . . . . . . . . . 789
10.10.13 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . 791
10.10.14 Stopping a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
10.10.15 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
10.10.16 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . 796
10.10.17 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
10.11 Managing the SAN Volume Controller clustered system by using the GUI. . . . . . . 798
10.11.1 System status information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
10.11.2 View I/O Groups and their associated nodes . . . . . . . . . . . . . . . . . . . . . . . . . 800
10.11.3 View SVC clustered system properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
10.11.4 Renaming the SAN Volume Controller clustered system . . . . . . . . . . . . . . . . 803
10.11.5 Renaming the site information of the nodes . . . . . . . . . . . . . . . . . . . . . . . . . . 805
10.11.6 Rename a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
10.11.7 Shutting down the SAN Volume Controller clustered system . . . . . . . . . . . . . 807
10.11.8 Power off a single node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
10.12 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
10.12.1 Updating system software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
10.12.2 Update drive software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
10.13 Managing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
10.14 Managing nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
10.14.1 Viewing node properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
10.14.2 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
10.14.3 Adding a node to the SAN Volume Controller clustered system. . . . . . . . . . . 817
10.14.4 Removing a node from the SAN Volume Controller clustered system . . . . . . 820
10.15 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
10.15.1 Events panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
10.15.2 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
10.15.3 Support panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
10.16 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
10.16.1 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
10.16.2 Modifying the user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
10.16.3 Removing a user password. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
10.16.4 Removing a user SSH public key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
10.16.5 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
10.16.6 Creating a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
10.16.7 Modifying the user group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
10.16.8 Deleting a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
10.16.9 Audit log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
10.17 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.1 Configuring the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.2 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.3 Fibre Channel information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
10.17.4 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
10.17.5 Email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
10.17.6 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
10.17.7 System options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
10.17.8 Date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
10.17.9 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
10.17.10 Setting GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
10.18 Upgrading the SAN Volume Controller software . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
10.18.1 Precautions before the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
10.18.2 SAN Volume Controller upgrade test utility. . . . . . . . . . . . . . . . . . . . . . . . . . . 859
10.18.3 Upgrade procedure from version 7.3.x.x to version 7.4.x.x. . . . . . . . . . . . . . . 860
10.18.4 Upgrade procedure from version 7.4.x.x to 7.4.y.y . . . . . . . . . . . . . . . . . . . . . 867
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AIX 5L™, DB2®, developerWorks®, DS4000®, DS5000™, DS6000™, DS8000®, Easy Tier®, FlashCopy®, FlashSystem™, Global Technology Services®, GPFS™, HyperSwap®, IBM®, IBM FlashSystem™, POWER®, Power Systems™, pureScale®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Smarter Planet®, Storwize®, System p®, System Storage®, Tivoli®, WebSphere®, XIV®
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbooks® publication is a detailed technical guide to the IBM System
Storage® SAN Volume Controller Version 7.4.
The SAN Volume Controller (SVC) is a virtualization appliance solution, which maps
virtualized volumes that are visible to hosts and applications to physical volumes on storage
devices. Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.
The IBM virtualization technology improves the management of information at the “block” level in a network, which enables applications and servers to share storage devices.
This book is intended for readers who want to implement the SVC at a 7.4 release level with
minimal effort.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Frank Enders has worked for the last seven years in EMEA Level 2 support for the IBM System Storage SAN Volume Controller and Storwize V7000 in Mainz, Germany, where his duties include pre-sales and post-sales support. He has worked for IBM Germany for 20 years, starting as a technician in disk production in Mainz before moving to magnetic head production four years later. When IBM closed disk production in Mainz in 2001, he changed roles again and continued working for IBM within ESCC Mainz as a member of the Installation Readiness team for products such as the IBM DS8000®, IBM DS6000™, and the IBM System Storage SAN Volume Controller. During that time, he studied for four years to gain a diploma in Electrical Engineering.
Thanks to the following people for their contributions to this project:
Barry Whyte
Katja Gebuhr
Paul Cashman
Paul Merrison
Nicholas Sunderland
Stephen Wright
John Fairhurst
Trevor Boardman
IBM Hursley, UK
Nick Clayton
IBM Systems & Technology Group, UK
Helen Burton
IBM Systems & Technology Group, Boulder, CO, US
Special thanks to the Brocade Communications Systems staff in San Jose, California, for
their unparalleled support of this residency in terms of equipment and support in many areas:
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Jim Baldyga
Brocade Communications Systems
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7933-03
for Implementing the IBM System Storage SAN Volume Controller V7.4
as created or updated on April 24, 2015.
New information
- New hardware
- IBM Easy Tier®
- GUI

Changed information
- GUI
- Planning
- Copy services
- And much more!
The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization, or the block aggregation layer. A description of file system
virtualization is beyond the scope of this book.
For more information about file system virtualization, see the following resources:
IBM General Parallel File System (GPFS™):
http://www.ibm.com/systems/software/gpfs/
IBM Scale Out Network Attached Storage, which is based on GPFS:
http://www.ibm.com/systems/storage/network/sonas/
The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and its layers, as shown in Figure 1-1 on page 3. It
illustrates three layers of a storage domain: the file, block aggregation, and block subsystem
layers.
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
The IBM implementation of a block aggregation solution is the IBM SAN Volume Controller
(SVC). The SVC is implemented as a clustered appliance in the storage network layer. For
more information about the reasons why IBM chose to develop its SVC in the storage network
layer, see Chapter 2, “IBM SAN Volume Controller” on page 9.
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a RAID array in the underlying disk subsystem.
Specific to the SVC implementation, the address space that is mapped between the logical
entity is referred to as a volume. The physical disk is referred to as managed disks (MDisks).
The server and application are aware of the logical entities only, and they access these
entities by using a consistent interface that is provided by the virtualization layer.
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which allows a user to move or migrate data between
physical locations, which are referred to as storage pools.
The SVC delivers these functions in a homogeneous way on a scalable and highly available
platform over any attached storage and to any attached server.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.
Because the SVC provides advanced functions, such as mirroring and FlashCopy, there is no
need to purchase them again for each new disk subsystem.
Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. The use of the installed raw capacity in the disk
subsystems shows usage numbers of less than 35%, depending on the RAID level that is
used. A block-level virtualization solution, such as IBM SAN Volume Controller, can allow
capacity usage to increase to approximately 75 - 80%.
With the SVC, free space does not need to be maintained and managed within each storage
subsystem, which further increases capacity usage.
The SVC storage engine model DH8 and SVC Small Form Factor (SFF) Expansion
Enclosure Model 24F deliver increased performance, expanded connectivity, compression
acceleration, and additional internal flash storage capacity.
The front view of the two-node cluster based on the 2145-DH8 is shown in Figure 1-3.
The IBM SAN Volume Controller 2145-DH8 ships with preloaded V7.4 software. Downgrading
the software to version 7.2 or lower is not supported. The 2145-DH8 rejects any attempt to
install a version that is lower than 7.3.
1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. The use of storage virtualization as the
foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
The IBM SAN Volume Controller is a mature, eighth-generation virtualization solution that
uses open standards and complies with the SNIA storage model. The SVC is an
appliance-based, in-band block virtualization process in which intelligence (including
advanced storage functions) is migrated from individual storage devices to the storage
network.
SVC can improve the utilization of your storage resources, simplify your storage
management, and improve the availability of your applications.
We present a brief history of the SVC product, and then provide an architectural overview.
After we define SVC terminology, we describe software and hardware concepts and the other
functionalities that are available with the newest release.
Finally, we provide links to websites where you can obtain more information about the SVC.
One goal of this project was to create a system that was almost exclusively composed of
off-the-shelf standard parts. As with any enterprise-level storage control system, it had to
deliver a level of performance and availability that was comparable to the highly optimized
storage controllers of previous generations. The idea of building a storage control system that
is based on a scalable cluster of lower performance servers instead of a monolithic
architecture of two nodes is still a compelling idea.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at this website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of the IBM System Storage SAN Volume Controller was announced in July
2003.
Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
The most recently released hardware node, the 2145-DH8, is based on an IBM System x
3650 M4 server technology with the following features:
One 2.6 GHz Intel Xeon Processor E5-2650 v2 with eight processor cores (A second
processor is optional.)
Up to 64 GB of cache
Up to three four-port 8 Gbps Fibre Channel (FC) cards
Up to four two-port 16 Gbps FC cards
One four-port 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE) card
One 12 Gbps serial-attached SCSI (SAS) Expansion card for an additional two SAS
expansions
Three 1 Gbps ports for management and iSCSI host access
One technician port
Two battery packs
The SVC node can support up to two external Expansion Enclosures for Flash Cards. These
storage spaces can only be used for the IBM Easy Tier function. No host usage of this
storage space is possible.
The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host. With symmetric virtualization, the virtualization
engine is the central configuration point for the SAN. The virtualization engine directly
controls access to the storage and to the data that is written to the storage. As a result,
locking functions that provide data integrity and advanced functions, such as cache and
Copy Services, can be run in the virtualization engine itself. Therefore, the virtualization
engine is a central point of control for device and advanced function management.
Symmetric virtualization allows you to build a firewall in the storage network. Only the
virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage that is
associated with symmetric virtualization is scalability. Scalability can cause poor
performance because all input/output (I/O) must flow through the virtualization engine. To
solve this problem, you can use an n-way cluster of virtualization engines that has failover
capacity. You can scale the additional processor power, cache memory, and adapter
bandwidth to achieve the level of performance that you want. Additional memory and
processing power are needed to run advanced services, such as Copy Services and
caching.
The SVC uses symmetric virtualization. Single virtualization engines, which are known as
nodes, are combined to create clusters. Each cluster can contain between two and eight
nodes.
Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and performs a metadata-style service. The metadata server contains all the mapping and locking tables; the storage devices contain only data. In asymmetric virtual storage networks, the data flow is separated from the control flow, and a separate network or SAN link is used for control purposes. Because the flow of control is separated from the flow of data, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only knows about the metadata and not about the data itself.
The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, no true decoupling occurs with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration issues and questions are challenging, such as how to reconnect the servers to
the new controller, and how to reconnect them online without any effect on your applications.
Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on
might be necessary.
For these reasons, IBM chose the SAN or fabric-based appliance approach for the
implementation of the SVC.
On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
Creates a single pool of storage
Provides logical unit virtualization
Manages logical volumes
Mirrors logical volumes
SAN fabrics can include standard FC, FC over Ethernet, iSCSI over Ethernet, or possible
future types.
Figure 2-2 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high-availability requirements (most of the target clients for the SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN consists of a fault-tolerant
arrangement of two or more counterpart SANs, which provide alternative paths for each
SAN-attached device.
Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant
paths to volumes can be provided in both scenarios.
For simplicity, Figure 2-2 shows only one SAN fabric and two zones: host and storage. In a
real environment, it is a preferred practice to use two redundant SAN fabrics. The SVC can be
connected to up to four fabrics. For more information about zoning, see Chapter 3, “Planning
and configuration” on page 73.
A clustered system of SVC nodes that are connected to the same fabric presents logical
disks or volumes to the hosts. These volumes are created from managed LUNs or managed
disks (MDisks) that are presented by the RAID disk subsystems.
Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens
through the SVC nodes. This design is commonly referred to as symmetric virtualization.
LUNs that are not processed by the SVC can still be provided to the hosts.
For iSCSI-based host access, the use of two networks and separating iSCSI traffic within the
networks by using a dedicated virtual local area network (VLAN) path for storage traffic
prevents any IP interface, switch, or target port failure from compromising the host servers’
access to the volumes’ LUNs.
Storage pool (pool): Previously known as a managed disk (MDisk) group. A collection of storage that identifies an underlying set of resources. These resources provide the capacity and management requirements for a volume or set of volumes.
For more information about the terms and definitions that are used in the SVC environment,
see Appendix B, “Terminology” on page 889.
The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers that are
trying to manage the back-end storage. The SVC is based on the components that are
described next.
2.4.1 Nodes
Each SVC hardware unit is called a node. The node provides the virtualization for a set of
volumes, cache, and copy services functions. The SVC nodes are deployed in pairs (I/O Groups), and one or more pairs make up a clustered system, or system. A system can consist of 1 - 4 SVC
node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
Because the nodes are installed in pairs, each node provides a failover function to its partner
node if a node fails.
A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for
that specific volume are always processed by the same node within the I/O Group. This node
is referred to as the preferred node for this specific volume.
Therefore, in an SVC-based environment, the I/O handling for a volume can switch between
the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.
The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group within the SVC system;
therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume move. It also requires a rescan at the host level to ensure that the
multipathing driver is notified that the allocation of the preferred node changed and the ports
by which the volume is accessed changed. This modification can be done in the situation
where one pair of nodes becomes overused.
2.4.3 System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations
are then set for the individual system. For example, the maximum number of volumes that is
supported per system is 8,192 (having a maximum of 2,048 volumes per I/O Group), or the
maximum managed disk capacity that is supported is 32 PB per system.
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only SVC
system configuration information is backed up.
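For example, the configuration backup can be triggered from the CLI with the svcconfig command (a minimal sketch; the resulting backup files must still be copied off the configuration node and stored in a safe location):
svcconfig backup
This command collects the current system configuration into backup files on the configuration node. See Chapter 9, “SAN Volume Controller operations using the command-line interface” on page 493 for the complete backup and restore procedure.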
For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.
For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004924
Note: The site attribute in the node and controller object needs to be set in an enhanced
stretched system.
For more information, see Appendix C, “SAN Volume Controller stretched cluster” on
page 903.
2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks or LUNs, which are known as managed disks or
MDisks. Because the SVC does not attempt to provide recovery from physical disk failures
within the back-end controllers, an MDisk often is provisioned from a RAID array. However,
the application servers do not see the MDisks at all. Instead, they see a number of logical
disks, which are known as virtual disks or volumes, which are presented by the SVC I/O
Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.
The MDisks are placed into storage pools where they are divided into a number of extents,
which are 16 MB - 8192 MB, as defined by the SVC administrator.
For more information about the total storage capacity that is manageable per system
regarding the selection of extents, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004924#_Extents
A volume is host-accessible storage that was provisioned out of one storage pool; or, if it is a
mirrored volume, out of two storage pools.
Three candidate quorum disks exist. However, only one quorum disk is active at any time. For
more information about quorum disks, see 2.8.1, “Quorum disks” on page 45.
Therefore, a storage tier attribute is assigned to each MDisk, with the default being
generic_hdd. Starting with the SVC V6.1, a new tier 0 (zero) level disk attribute is available for
Flash, and it is known as generic_ssd.
At any point, an MDisk can be a member in one storage pool only, except for image mode
volumes. For more information, see 2.5.1, “Image mode volumes” on page 26.
Figure 2-3 on page 21 shows the relationships of the SVC entities to each other.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The size of the extent is 16 MB - 8192 MB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring (2.5.4, “Mirrored
volumes” on page 29) to copy volumes between pools.
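For illustration, the extent size is set when the storage pool is created, for example, from the CLI (the pool and MDisk names that are shown here are hypothetical):
svctask mkmdiskgrp -name Pool_01 -ext 1024 -mdisk mdisk0:mdisk1
The -ext parameter specifies the extent size in MB and cannot be changed after the pool is created.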
The SVC limits the number of extents in a system to 2^22 (approximately 4 million). Because the number of
addressable extents is limited, the total capacity of an SVC system depends on the extent
size that is chosen by the SVC administrator. The capacity numbers that are specified in
Table 2-2 on page 22 for an SVC system assume that all defined storage pools were created
with the same extent size.
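As a rough illustration (a sketch that assumes the approximately 4 million extent limit applies across the entire system), the maximum manageable capacity scales linearly with the extent size:
4,194,304 extents x 256 MB = 1 PB
4,194,304 extents x 1,024 MB = 4 PB
4,194,304 extents x 8,192 MB = 32 PB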
For most systems, a capacity of 1 - 2 PB is sufficient. A preferred practice is to use 256 MB for
larger clustered systems. The default extent size is 1,024 MB.
For more information, see IBM System Storage SAN Volume Controller and Storwize V7000
Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
2.4.9 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks; they can see only the logical volumes that are created from
combining extents from a storage pool.
The striped mode is the best method to use for most cases. However, sequential extent
allocation mode can slightly increase the sequential performance for certain workloads.
Figure 2-4 shows the striped volume mode and the sequential volume mode. It also shows how the extent allocation from the storage pool differs between the two modes.
You can allocate the extents for a volume in many ways. The process is under full user control
when a volume is created and the allocation can be changed at any time by migrating single
extents of a volume to another MDisk within the storage pool.
For more information about how to create volumes and migrate extents by using the GUI or
command-line interface (CLI), see Chapter 6, “Data migration” on page 241, Chapter 9, “SAN
Volume Controller operations using the command-line interface” on page 493, and
Chapter 10, “SAN Volume Controller operations using the GUI” on page 655.
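As a brief preview (a hedged sketch; the pool and volume names are hypothetical, and Chapter 9 describes the full syntax), a striped volume might be created from the CLI as follows:
svctask mkvdisk -mdiskgrp Pool_01 -iogrp 0 -size 100 -unit gb -vtype striped -name VOL_APP01
The -vtype parameter selects the extent allocation policy (striped or sequential), and the volume is presented by I/O Group 0.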
Easy Tier then creates an extent migration plan that is based on this activity and dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled from the high-tier MDisks back to a lower-tiered
MDisk.
Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and
volume level.
The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be offloaded from the SVC nodes. Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create a summary report.
Easy Tier creates a migration report every 24 hours on the number of extents that can be
moved if the pool were a multitiered storage pool. Therefore, although Easy Tier extent
migration is not possible within a single-tier pool, the Easy Tier statistical measurement
function is available.
The usage statistics file can be offloaded from the SVC configuration node by using the GUI
(click Settings → Support). Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create the statistics report. A web browser is used to view the STAT output. For more information about the STAT utility, see the following web page:
https://ibm.biz/BdEzve
For more information about Easy Tier functionality and generating statistics by using IBM
STAT, see Chapter 7, “Advanced features for storage efficiency” on page 361.
2.4.12 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.
Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.
Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that
are configured on the host object.
For an iSCSI (SCSI over Ethernet) connection, the IQN identifies the iSCSI target (destination)
adapter. Host objects can have IQNs and WWPNs.
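For example, an iSCSI host object and a volume mapping might be defined from the CLI along the following lines (the host, IQN, and volume names are hypothetical):
svctask mkhost -name AppServer1 -iscsiname iqn.1991-05.com.microsoft:appserver1
svctask mkvdiskhostmap -host AppServer1 VOL_APP01
The first command creates the host object from its IQN, and the second command maps a volume to it so that the host can access the volume.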
Certain configuration limits exist in the SVC, including the following list of important limits. For
the most current information, see the SVC support site.
Sixteen worldwide node names (WWNNs) per storage subsystem
One PB MDisk
8192 MB extents
Long object names can be up to 63 characters
For more information about these features, see 2.12, “What is new with the SAN Volume
Controller 7.4” on page 69.
Volumes have two major modes: managed mode and image mode. Managed mode volumes
have two policies: the sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.
Image mode provides a one-to-one mapping between the logical block addresses (LBAs)
between a volume and an MDisk. Image mode volumes have a minimum size of one block
(512 bytes) and always occupy at least one extent.
An image mode MDisk is mapped to one, and only one, image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.
The SVC also supports the reverse process in which a managed mode volume can be
migrated to an image mode volume. If a volume is migrated to another MDisk, it is
represented as being in managed mode during the migration and is only represented as an
image mode volume after it reaches the state where it is a straight-through mapping.
An image mode MDisk is associated with exactly one volume. The last extent is partial (not filled) if the size of the image mode MDisk is not an exact multiple of the storage pool’s extent size. An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any SVC metadata extents that are assigned to it. Managed or image mode
MDisks are always members of a storage pool.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the SVC copy services functions can be applied to image mode disks. See
Figure 2-5 on page 27.
Figure 2-6 on page 28 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of the
MDisks: A, B, or C. The mapping table stores the details of this indirection.
In Figure 2-6 on page 28, several of the MDisk extents are unused. No volume extent maps to
them. These unused extents are available for use in creating volumes, migration, expansion,
and so on.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: if the set of MDisks from which to allocate extents contains more than
one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no
free extents when its turn arrives, its turn is missed and the round-robin moves to the next
MDisk in the set that has a free extent.
When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the “striping effect” that is inherent in a
round-robin algorithm places the first extent for many volumes on the same MDisk. Placing
the first extent of a number of volumes on the same MDisk can lead to poor performance for
workloads that place a large I/O load on the first extent of each volume, or that create multiple
sequential streams.
Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode
volumes. Using SVC Copy Services instead of the underlying disk controller copy services
gives better results.
The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships;
it is serviced by an I/O Group; and it has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.
This feature provides a point-in-time copy functionality that is achieved by “splitting” a copy
from the volume. However, the mirrored volume feature does not address other forms of
mirroring that are based on remote copy, which is sometimes called IBM HyperSwap®, that
mirrors volumes across I/O Groups or clustered systems. It is also not intended to manage
mirroring or remote copy functions in back-end controllers.
A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default “synchronization rate” or at a rate that is defined when the
volume is created or modified. The synchronization status for mirrored volumes is recorded
on the quorum disk.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a “no synchronization” option can be selected that
declares the copies as “synchronized” (even when they are not).
To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 KB grains that were written to since the synchronization was lost are copied. This
approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
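A minimal sketch of this migration technique from the CLI (the volume and pool names are hypothetical) might look like the following example:
svctask addvdiskcopy -mdiskgrp Pool_02 VOL_APP01
svcinfo lsvdisksyncprogress VOL_APP01
svctask rmvdiskcopy -copy 0 VOL_APP01
The lsvdisksyncprogress command is used to confirm that the new copy is fully synchronized before the original copy 0 is removed.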
Where there are two copies of a volume, one copy is known as the primary copy. If the
primary is available and synchronized, reads from the volume are directed to it. The user can
select the primary when the volume is created or can change it later.
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Figure 2-8 Data flow for write I/O processing in a mirrored volume in the SVC
As shown in Figure 2-8, all the writes are sent by the host to the preferred node for each
volume (1); then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).
For more information about the change, see Chapter 6 of IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.
Important: Mirrored volumes can be taken offline if there is no quorum disk available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Mirrored volumes use bitmap space at a rate of 1 bit per 256 KB grain, which translates to
1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap
space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap
space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.
When a thin-provisioned volume is initially created, a small amount of the real capacity is used for initial metadata. When I/Os are written to grains of the thin volume that were not previously written, grains of the real capacity are used to store the metadata and the actual user data. When I/Os are written to grains that were previously written, the existing grain is updated in place.
The grain size is defined when the volume is created. The grain size can be 32 KB, 64 KB,
128 KB, or 256 KB. The default grain size is 256 KB, which is the recommended option. If you
select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The grain size
cannot be changed after the thin-provisioned volume is created. Generally, smaller grain sizes
save space, but they require more metadata access, which can adversely affect performance.
If you do not use the thin-provisioned volume as a FlashCopy source or target volume, use
256 KB to maximize performance. If you use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the FlashCopy
function.
The metadata storage overhead is never greater than 0.1% of the user data. The overhead is
independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in
a FlashCopy map, use the same grain size as the map grain size for the best performance. If
you are using the thin-provisioned volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity allows a larger amount of data and metadata to be stored on the
volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as
new data is written to the volume. If the user initially assigns too much real capacity to the
volume, the real capacity can be reduced to free storage for other uses.
A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to
automatically add a fixed amount of more real capacity to the thin volume as required.
Therefore, autoexpand attempts to maintain a fixed amount of unused real capacity for the
volume, which is known as the contingency capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and it must expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
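For illustration, a thin-provisioned volume with autoexpand might be created from the CLI as follows (a sketch; the pool and volume names are hypothetical):
svctask mkvdisk -mdiskgrp Pool_01 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -name THIN_VOL01
The -size parameter sets the virtual capacity, -rsize sets the initial real capacity (here, as a percentage of the virtual capacity), and -autoexpand enables the automatic growth of the real capacity.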
To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is
specified, the event is logged when 20% of the free capacity remains.
This governing feature can be used to satisfy a quality of service (QoS) requirement or a
contractual obligation (for example, if a client agrees to pay for I/Os performed, but does not
pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the
physical medium are subject to I/O governing.
The governing rate can be set in I/Os per second or MB per second. It can be altered by
changing the throttle value by running the chvdisk command and specifying the -rate
parameter.
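For example, the following commands (the volume name is hypothetical) set a throttle of 2,000 I/Os per second and, alternatively, 200 MBps:
svctask chvdisk -rate 2000 VOL_APP01
svctask chvdisk -rate 200 -unitmb VOL_APP01
Without the -unitmb parameter, the rate is interpreted as I/Os per second.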
I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does
not affect the data copy rate from the primary volume. Governing has no effect on
FlashCopy or data migration I/O rates.
An I/O budget is expressed as a number of I/Os (or MBs) over a minute. The budget is evenly
divided among all SVC nodes that service that volume, which means among the nodes that
form the I/O Group of which that volume is a member.
The algorithm operates two levels of policing. While a volume on each SVC node receives I/O
at a rate lower than the governed level, no governing is performed. However, when the I/O
rate exceeds the defined threshold, the policy is adjusted. A check is made every minute to
see that each node is receiving I/O below the threshold level. Whenever this check shows that
the host exceeded its limit on one or more nodes, policing begins for new I/Os.
This algorithm might cause I/O to backlog in the front end, which might eventually cause a
Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1-second budget on all nodes in the I/O Group for 1 minute, the policing is
relaxed and monitoring takes place over the 1-minute period as before.
The iSCSI function is a software function that is provided by the SVC code, not hardware.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) starts read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server runs a command and completion
is indicated by a special signal alert.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The following concepts of names and addresses are carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
that node. The address consists of a host name or IP address, a TCP port number (for the
target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can be up to
255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) the following
elements:
The string iqn
A date code that specifies the year and month in which the organization registered the
domain or subdomain name that is used as the naming authority string
The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name
Optional: A colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique
For the SVC, the IQN for its iSCSI target is specified as shown in the following example:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Microsoft Windows server, the IQN (that is, the name for the iSCSI initiator), can be
defined as shown in the following example:
iqn.1991-05.com.microsoft:<computer name>
The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target. The alias is independent of the name and
does not have to be unique. Because it is not unique, the alias must be used in a purely
informational way. It cannot be used to specify a target at login or during authentication.
Targets and initiators can have aliases.
Caution: Before you change system or node names for an SVC system that has servers
that are connected to it by way of iSCSI, be aware that because the system and node
name are part of the SVC’s IQN, you can lose access to your data by changing these
names. The SVC GUI displays a warning, but the CLI does not display a warning.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.
The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
When the login is confirmed, the iSCSI session enters the full feature phase. If more than one
TCP connection was established, iSCSI requires that each command and response pair must
go through one TCP connection. Therefore, each separate read or write command is carried
out without the necessity to trace each request for passing separate flows. However, separate
transactions can be delivered through separate TCP connections within one session.
Figure 2-11 shows an overview of the various block-level storage protocols and the position of
the iSCSI layer.
SVC nodes have up to six Ethernet ports. These ports provide 1 Gbps support or, with the optional 10 Gbps Ethernet card, 10 Gbps support. System management is possible only over the 1 Gbps ports.
Figure 2-12 shows an overview of the IP addresses on an SVC node port and how these IP
addresses are moved between the nodes of an I/O Group.
The management IP addresses and the iSCSI target IP addresses fail over to the partner
node N2 if node N1 fails (and vice versa). The iSCSI target IPs fail back to their corresponding
ports on node N1 when node N1 is running again.
It is a preferred practice to keep all of the eth0 ports on all of the nodes in the system on the
same subnet. The same practice applies for the eth1 ports; however, it can be a separate
subnet to the eth0 ports.
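For illustration, an iSCSI port IP address might be assigned to a node Ethernet port with the cfgportip command (the addresses and node name are hypothetical):
svctask cfgportip -node node1 -ip 192.168.10.11 -mask 255.255.255.0 -gw 192.168.10.1 1
The trailing parameter is the Ethernet port ID on the node; repeat the command for the corresponding port on the partner node so that failover works as described previously.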
You can configure a maximum of 256 iSCSI hosts per I/O Group per SVC because of IQN
limits.
A CHAP secret can be assigned to each SVC host object. The host must then use CHAP
authentication to begin a communications session with a node in the system. A CHAP secret
can also be assigned to the system.
Volumes are mapped to hosts, and LUN masking is applied by using the same methods that
are used for FC LUNs.
Because iSCSI can be used in networks where data security is a concern, the specification
allows for separate security methods. For example, you can set up security through a method,
such as IPSec, which is not apparent for higher levels, such as iSCSI, because it is
implemented at the IP level. For more information about securing iSCSI, see Securing Block
Storage Protocols over IP, RFC3723, which is available at this website:
http://tools.ietf.org/html/rfc3723
iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (this action is the
key difference) the host is reconnected to the same IP target that reappears after a short
period and its volumes continue to be available for I/O. iSCSI allows failover without host
multipathing. To achieve this failover without host multipathing, the partner node in the I/O
Group takes over the port IP addresses and iSCSI names of a failed node.
A host multipathing driver for iSCSI is required if you want the following capabilities:
Protecting a server from network link failures
Protecting a server from network failures if the server is connected through two separate
networks
Providing load balancing on the server’s network links
Copy services functions are implemented within a single SVC system (FlashCopy and image mode migration) or between SVC systems, or between SVC and Storwize systems (Metro Mirror and Global Mirror). To use Metro Mirror and Global Mirror functions, you must have the remote copy
license installed on each side.
You can create partnerships with the SVC and Storwize systems to allow Metro Mirror and
Global Mirror to operate between the two systems. To create these partnerships, both
clustered systems must be at version 6.3.0 or later.
Figure 2-13 shows an example of the layers in an SVC and Storwize clustered-system
partnership.
Within the SVC, both intracluster copy services functions (FlashCopy and image mode
migration) operate at the block level. Intercluster functions (Global Mirror and Metro Mirror)
operate at the volume layer. A volume is the container that is used to present storage to host
systems. Operating at this layer allows the Advanced Copy Services functions to benefit from
caching at the volume layer and helps facilitate the asynchronous functions of Global Mirror
and lessen the effect of synchronous Metro Mirror.
Operating at the volume layer also allows Advanced Copy Services functions to operate
above and independently of the function or characteristics of the underlying disk subsystems
that are used to provide storage resources to an SVC system. Therefore, if the physical
storage is virtualized with an SVC or Storwize and the backing array is supported by the SVC
or Storwize, you can use disparate backing storage.
FlashCopy: Although FlashCopy operates at the block level, this level is the block level of
the SVC, so the physical backing storage can be anything that the SVC supports. However,
performance is limited to the slowest performing storage that is involved in FlashCopy.
Synchronous remote copy ensures that updates are physically committed (not in volume
cache) in both the primary and the secondary SVC clustered systems before the application
considers the updates complete. Therefore, the secondary SVC clustered system is fully
up-to-date if it is needed in a failover.
However, the application is fully exposed to the latency and bandwidth limitations of the
communication link to the secondary system. In a truly remote situation, this extra latency can
have a significantly adverse effect on application performance; therefore, a limitation of 300
kilometers (~186 miles) exists on the distance of Metro Mirror. This distance induces latency
of approximately 5 microseconds per kilometer, which does not include the latency that is
added by the equipment in the path.
The nature of synchronous remote copy is that latency for the distance and the equipment in
the path is added directly to your application I/O response times. The overall latency for a
complete round trip previously could not exceed 80 milliseconds. With version 7.4, the maximum supported round-trip time was increased to 250 ms. Figure 2-14 shows a list of the supported round-trip times.
Special configuration guidelines for SAN fabrics are used for data replication. The distance
and available bandwidth of the intersite links must be considered. For more information about
these guidelines, see the SVC Support Portal, which is available at this website:
https://ibm.biz/BdEzB5
For more information about the SVC’s synchronous mirroring, see Chapter 8, “Advanced
Copy Services” on page 405.
In asynchronous remote copy, the application is provided acknowledgment that the write is
complete before the write is committed (written to backing storage) at the secondary site.
Therefore, on a failover, certain updates (data) might be missing at the secondary site.
The application must have an external mechanism for recovering the missing updates or
recovering to a consistent point (which is usually a few minutes in the past). This mechanism
can involve user intervention, but in most practical scenarios, it must be at least partially
automated.
The application must then be started and a recovery procedure to either a consistent point in
time or recovery of the missing updates must be performed. For this reason, the initial state of
Global Mirror targets is called crash consistent. This term might sound daunting, but it merely
means that the data on the volumes appears to be in the same state as though an application
crash occurred.
In asynchronous remote copy with cycling mode (Change Volumes), changes are tracked and
copied to intermediate Change Volumes where needed. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the primary
volume, and more data must be recovered if there is a failover. Because the data transfer can
be smoothed over a longer time period, however, lower bandwidth is required to provide an
effective solution.
Because most applications, such as databases, have mechanisms for dealing with this type of
data state for a long time, it is a fairly mundane operation (depending on the application). After
this application recovery procedure is finished, the application starts normally.
RPO: When you are planning your Recovery Point Objective (RPO), you must account for
application recovery procedures, the length of time that they take, and the point to which
the recovery procedures can roll back data.
Although Global Mirror on an SVC can provide typically subsecond RPO times, the
effective RPO time can be up to 5 minutes or longer, depending on the application
behavior.
Most clients aim to automate the failover or recovery of the remote copy through failover
management software. The SVC provides Simple Network Management Protocol (SNMP)
traps and interfaces to enable this automation. IBM Support for automation is provided by IBM
Tivoli Storage Productivity Center.
The Tivoli documentation is available at the IBM Tivoli Storage Productivity Center
Knowledge Center at this website:
https://ibm.biz/BdEzdX
2.7.2 FlashCopy
FlashCopy is the IBM branded name for point-in-time copy, which is sometimes called time-zero (T0) copy. This function makes a copy of the blocks on a source volume and can duplicate them on 1 - 256 target volumes.
FlashCopy: When the multiple target capability of FlashCopy is used, if any other copy (C)
is started while an existing copy is in progress (B), C has a dependency on B. Therefore, if
you end B, C becomes invalid.
FlashCopy works by creating one or two (for incremental operations) bitmaps to track
changes to the data on the source volume. This bitmap is also used to present an image of
the source data at the point that the copy was taken to target hosts while the actual data is
being copied. This capability ensures that copies appear to be instantaneous.
If your FlashCopy targets have existing content, the content is overwritten during the copy
operation. Also, the “no copy” (copy rate 0) option, in which only changed data is copied,
overwrites existing content. After the copy operation starts, the target volume appears to have
the contents of the source volume as it existed at the point that the copy was started.
Although the physical copy of the data takes an amount of time that varies based on system
activity and configuration, the resulting data at the target appears as though the copy was
made instantaneously.
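A minimal sketch of creating and starting a FlashCopy mapping from the CLI (the source and target volume names are hypothetical; Chapter 8 covers the full procedure):
svctask mkfcmap -source VOL_APP01 -target VOL_APP01_T0 -name FCMAP_01 -copyrate 50
svctask startfcmap -prep FCMAP_01
The -copyrate parameter controls how quickly the background copy proceeds (0 means no background copy), and -prep prepares the mapping by flushing the source volume cache before the mapping is started.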
The SVC also permits source and target volumes for FlashCopy to be thin-provisioned
volumes. FlashCopies to or from thinly provisioned volumes allow the duplication of data
while using less space. How much space these volumes consume depends on the rate of change of the data, so they are typically used where the copy is needed only for a limited time. Over time, they might fill the physical space that they were allocated. Reverse FlashCopy enables target
volumes to become restore points for the source volume without breaking the FlashCopy
relationship and without having to wait for the original copy operation to complete. The SVC
supports multiple targets and therefore multiple rollback points.
In most practical scenarios, the FlashCopy functionality of the SVC is integrated into a
process or procedure that allows the benefits of the point-in-time copies to be used to
address business needs. IBM offers Tivoli Storage FlashCopy Manager for this functionality.
For more information about Tivoli Storage FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivoli-storage-flashcopy-manager
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick
recovery of their applications and databases.
Image mode migration works by establishing a one-to-one static mapping of volumes and
MDisks. This mapping allows the data on the MDisk to be presented directly through the
volume layer and allows the data to be moved between volumes and the associated backing
MDisks. This function provides a facility to use the SVC as a migration tool in situations where you otherwise have no recourse, such as migrating from Vendor A hardware to Vendor B hardware when the two systems have no other compatibility.
Volume mirroring migration is a clever use of the facility that the SVC offers to mirror data on
a volume between two sets of storage pools. As with the logical volume management portion
of certain operating systems, the SVC can mirror data transparently between two sets of
physical hardware. You can use this feature to move data between MDisk groups with no host
I/O interruption by removing the original copy after the mirroring is completed. This feature is
much more limited than FlashCopy and must not be used where FlashCopy is appropriate.
Careful planning: When you are migrating by using the volume mirroring migration, your
I/O rate is limited to the slowest of the two MDisk groups that are involved. Therefore,
planning carefully to avoid affecting the live systems is imperative.
Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.
The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.
The eight-node limit for an SVC system is a limitation that is imposed by the microcode and
not a limit of the underlying architecture. Larger system configurations might be available in
the future.
Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes. The clustered system software makes the code portable. It provides the
means to keep the single instances of the SVC code that are running on separate systems’
nodes in sync. Therefore, restarting nodes during a code upgrade, adding new nodes, or
removing old nodes from a system or failing nodes cannot affect the SVC’s availability.
All active nodes of a system must know that they are members of the system, especially in
situations where it is key to have a solid mechanism to decide which nodes form the active
system, such as the split-brain scenario where single nodes lose contact with other nodes. A
worst case scenario is a system that splits into two separate systems.
Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.
The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.
If a tiebreaker condition occurs, the half of the system nodes that can reserve the quorum disk after the split occurs locks the disk and continues to operate. The other
half stops its operation. This design prevents both sides from becoming inconsistent with
each other.
When MDisks are added to the SVC system, the SVC system checks the MDisk to see
whether it can be used as a quorum disk. If the MDisk fulfills the requirements, the SVC
assigns the first three MDisks that are added to the system as quorum candidates. One of
these MDisks is selected as the active quorum disk.
Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.
You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.
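For example, the following commands (with illustrative IDs) list the quorum disk candidates and then assign a different MDisk as quorum candidate index 2 by using the chquorum command, which is covered later in this book:
   svcinfo lsquorum
   svctask chquorum -mdisk 9 2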
For disaster recovery purposes, a system must be regarded as a single entity, so the system
and the quorum disk must be colocated.
Special considerations are required for the placement of the active quorum disk for a
stretched or split cluster and split I/O Group configurations. For more information, see this
website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).
Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.
During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).
If the microcode on a node becomes corrupted, which results in a failure, the workload is
transferred to another node. The code on the failed node is repaired, and the node is
readmitted into the system (which is an automatic process).
2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).
Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KB (eight
segments). A track might be only partially populated with valid pages. The SVC coalesces
writes up to a 256 KB track size if the writes are in the same tracks before destage. For example, if 4 KB is written into a track and another 4 KB is written to another location in the same track, the two writes are coalesced and destaged together. Therefore, the blocks that are written from the SVC to the disk subsystem can be any size from 512 bytes up to 256 KB. The large cache and advanced cache management algorithms within the SVC 2145-DH8 allow it to improve on the performance of many types of underlying disk technologies. The SVC’s capability to manage, in the background, the destaging operations that are incurred by writes (while still maintaining full data integrity) helps the SVC achieve good database performance.
Many changes were made to the way that SVC uses its cache in the 7.3 code level. The
cache is separated into two layers: an upper cache and a lower cache.
Figure 2-15 shows the separation of the upper and lower cache.
In this two-layer design, the lower cache contains the cache algorithm intelligence.
Combined, the two levels of cache also deliver the following functionality:
Pins data when the LUN goes offline
Provides enhanced statistics for Tivoli Storage Productivity Center and maintains
compatibility with an earlier version
Provides trace for debugging
Reports medium errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Supports two-way operation
Supports none, read-only, and read/write as user-exposed caching policies
Supports flush-when-idle
Supports expanding cache as more memory becomes available to the platform
Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O Group
Enables switching of the preferred node without needing to move volumes between I/O
Groups
Depending on the size, age, and technology level of the disk storage system, the total
available cache in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in either the SVC or the
disk controller level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever the cache is located. Therefore, if the storage controller
level of the cache has the greater capacity, expect hits to this cache to occur, in addition to
hits in the SVC cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on both the underlying storage technology and the degree to which the workload
exhibits hotspots or sensitivity to cache size or cache algorithms.
The GUI and a web server are installed in the SVC system nodes. Therefore, any browser
can access the management GUI if the browser is pointed at the system IP address.
Management console
Historically, the management console for the SVC was the IBM System Storage Productivity Center (SSPC). This appliance is no longer needed because the SVC can be reached through the internal management GUI.
Users that are authenticated by an LDAP server can log in to the SVC web-based GUI and
the CLI. Unlike remote authentication through Tivoli Integrated Portal, users do not need to be
configured locally for CLI access. An SSH key is not required for CLI login in this scenario,
either. However, locally administered users can coexist with remote authentication enabled.
The default administrative user that uses the name superuser must be a local user. The
superuser cannot be deleted or manipulated, except for the password and SSH key.
A user that is authenticated remotely by an LDAP server is granted permissions on the SVC
according to the role that is assigned to the group of which it is a member. That is, any SVC
user group with its assigned role, for example, CopyOperator, must exist with an identical
name on the SVC system and on the LDAP server, if users in that role are to be authenticated
remotely.
In the following example, we demonstrate LDAP user authentication that uses a Microsoft
Windows Server 2008 R2 domain controller that is acting as an LDAP server.
4. You must configure the following parameters in the Configure Remote Authentication
window, as shown in Figure 2-18 and Figure 2-19 on page 52:
– For LDAP Type, select Microsoft Active Directory. (For an OpenLDAP server, select
Other for the type of LDAP server.)
– For Security, choose None. (If your LDAP server requires a secure connection, select
Transport Layer Security; the LDAP server’s certificate is configured later.)
– Click Advanced Settings to expand the bottom part of the window. Leave the User
Name and Password fields empty if your LDAP server supports anonymous bind. For
our MS AD server, we enter the credentials of an existing user on the LDAP server with
permission to query the LDAP directory. You can enter this information in the format of
an email address, for example, administrator@itso.corp, or in the distinguished
format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common
name portion cn=users for MS AD servers.
– If your LDAP server uses separate attributes from the predefined attributes, you can
edit them here. You do not need to edit the attributes when MS AD is used as the LDAP
service.
5. Figure 2-20 shows the Configure Remote Authentication window, where we configure the
following LDAP server details:
– Enter the IP address of at least one LDAP server.
– Although it is marked as optional, it might be required to enter a Base DN in the
distinguished name format, which defines the starting point in the directory at which to
search for users, for example, dc=itso,dc=corp.
– You can add more LDAP servers by clicking the plus (+) icon.
– Check Preferred if you want to use preferred LDAP servers.
– Click Finish to save the settings.
Now that remote authentication is enabled and configured on the SVC, we work with the user groups. For remote authentication through LDAP, no local SVC users are maintained, but the user groups must be set up correctly. You can use the existing built-in SVC user groups or groups that are created in SVC user management. However, self-defined groups might be advisable so that the SVC default group names do not interfere with existing group names on the LDAP server. Any user group, whether built-in or self-defined, must be enabled for remote authentication.
2. In the Create User Group window that is shown in Figure 2-21, complete the following
steps:
a. Enter a meaningful Group Name (for example, SVC_LDAP_CopyOperator), according to
its intended role.
b. Select the Role that you want to use by clicking Copy Operator.
c. To mark LDAP for Remote Authentication, select Enable for this group and then click
Create.
You can modify these settings in a group’s properties at any time.
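The same user group can also be created from the CLI. The following sketch assumes the -remote flag of the mkusergrp command at this code level, which enables the group for remote authentication:
   svctask mkusergrp -name SVC_LDAP_CopyOperator -role CopyOperator -remote
   svcinfo lsusergrp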
Next, we complete the following steps to create a group with the same name on the LDAP
server, that is, in the Active Directory Domain:
1. On the Domain Controller, start the Active Directory Users and Computers management
console and browse your domain structure to the entity that contains the user groups.
Click the Create new user group icon as highlighted in Figure 2-22 on page 54 to create
a group.
2. Enter the same name, SVC_LDAP_CopyOperator, in the Group Name field, as shown in
Figure 2-23. (The name is case sensitive.) Select the correct Group scope for your
environment and select Security for Group type. Click OK.
3. Edit the user’s properties so that the user can log in to the SVC. Make the user a member
of the appropriate user group for the intended SVC role, as shown in Figure 2-24 on
page 55, and click OK to save and apply the settings.
We are now ready to authenticate the users for the SVC through the remote server. To ensure
that everything works correctly, we complete the following steps to run a few tests to verify the
communication between the SVC and the configured LDAP service:
1. Select Settings → Security, and then select Global Actions → Test LDAP
Connections, as shown in Figure 2-25.
2. We test a real user authentication attempt. Select Settings → Security, then select
Global Actions → Test LDAP Authentication, as shown in Figure 2-27.
3. As shown in Figure 2-28, enter the User Credentials of a user that was defined on the
LDAP server, and then click Test.
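The same checks can be run from the CLI, assuming the testldapserver command that is available at this code level. Without parameters, the command tests only the connection to the configured LDAP servers; with a user name and password (illustrative values are shown here), it also tests authentication:
   svctask testldapserver
   svctask testldapserver -username copyop1 -password passw0rd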
Both the LDAP connection test and the LDAP authentication test must complete successfully
to ensure that LDAP authentication works correctly. If an error message is displayed during the LDAP authentication test, it points to user authentication problems; in that case, it might help to analyze the LDAP server’s response outside of the SVC. You can use any native LDAP query
tool, for example, the no-charge software LDAPBrowser tool, which is available at this
website:
http://www.ldapbrowser.com/
For a pure MS AD environment, you can use the Microsoft Sysinternals ADExplorer tool,
which is available at this website:
http://technet.microsoft.com/en-us/sysinternals/bb963907
Assuming that the LDAP connection and the authentication test succeeded, users can log in
to the SVC GUI and CLI by using their network credentials, for example, their Microsoft
Windows domain user name and password.
Figure 2-29 shows the web GUI login window with the Windows domain credentials entered.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.
After a successful login, the user name is displayed in a welcome message in the upper-right
corner of the window, as highlighted in Figure 2-30 on page 58.
Logging in by using the CLI is possible with the short user name or the fully qualified name.
The lscurrentuser CLI command displays the user name of the currently logged in user and
their role.
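For example, a remotely authenticated user might see output similar to the following (the system name, user name, and output values are illustrative):
   IBM_2145:ITSO_SVC:copyop1>svcinfo lscurrentuser
   name copyop1
   role CopyOperator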
Forbidden characters are the single quotation mark (‘), colon (:), percent symbol (%),
asterisk (*), comma (,), and double quotation marks (“).
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters; however, passwords cannot begin or end with blanks.
To register an SSH key for the superuser to provide command-line access, select Service
Assistant → Configure CLI Access to assign a temporary key. However, the key is lost
during a node restart. The permanent way to add the key is through the normal GUI; select
User Management → superuser → Properties to register the SSH key for the superuser.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.
User groups are used for local and remote authentication. Because the SVC knows of five
roles, by default, five user groups are defined in an SVC system, as shown in Table 2-3.
User group ID   User group name   Role
0               SecurityAdmin     SecurityAdmin
1               Administrator     Administrator
2               CopyOperator      CopyOperator
3               Service           Service
4               Monitor           Monitor
The access rights for a user who belongs to a specific user group are defined by the role that
is assigned to the user group. It is the role that defines what a user can or cannot do on an
SVC system.
Table 2-4 on page 60 shows the roles ordered (from the top) by the least privileged Monitor
role down to the most privileged SecurityAdmin role. The NasSystem role has no special user
group.
Service All commands that are allowed for the Monitor role and applysoftware,
setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk,
includemdisk, clearerrlog, cleardumps, settimezone, stopcluster,
startstats, stopstats, and setsystemtime
CopyOperator All commands allowed for the Monitor role and prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp,
switchrcconsistgrp, chrcconsistgrp, startrcrelationship,
stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership
SecurityAdmin All commands, except those commands that are allowed by the NasSystem
role
Local users: Local users are created for each SVC system. Each user has a name, which
must be unique across all users in one system.
If you want to allow access for a user on multiple systems, you must define the user in each
system with the same name and the same privileges.
A local user always belongs to only one user group. Figure 2-31 on page 61 shows an
overview of local authentication within the SVC.
Remote users must be defined in the SVC system only if command-line access is required; no local user is required for GUI-only remote access. For users that require CLI access with remote authentication, the user must be defined locally with the remote authentication flag set and a password.
Remote users cannot belong to any user group because the remote authentication service,
for example, an LDAP directory server, such as IBM Tivoli Directory Server or Microsoft
Active Directory, delivers the user group information.
The authentication service that is supported by the SVC is the Tivoli Embedded Security
Services server component level 6.2.
The Tivoli Embedded Security Services server provides the following key features:
Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The type of protocol that is used to access
the central directory or the kind of the directory system that is used is not apparent to the
SVC.
Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
they are using what appears to them to be a single system. SSO is used within Tivoli
Productivity Center. When the SVC access is started from within Tivoli Productivity
Center, the user does not have to log in to the SVC because the user logged in to Tivoli
Productivity Center.
Note: Failure to follow this step can lead to poor interactive performance of the SVC
user interface or incorrect user-role assignments.
Also, Tivoli Storage Productivity Center uses the Tivoli Integrated Portal infrastructure and its
underlying IBM WebSphere® Application Server capabilities to use an LDAP registry and
enable SSO.
For more information about implementing SSO within Tivoli Storage Productivity Center 4.2,
see the chapter about LDAP authentication support and SSO in IBM Tivoli Storage
Productivity Center V4.2 Release Guide, SG24-7894, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247894.html?Open
The new SVC 2145-DH8 Storage Engine has the following key hardware features:
One or two Intel Xeon E5 v2 Series eight-core processors, each with 32 GB memory
16 Gb FC, 8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet I/O ports for FC, iSCSI, and Fibre
Channel over Ethernet (FCoE) connectivity
Optional feature: Hardware-assisted compression acceleration
Optional feature: 12 Gb SAS expansion enclosure attachment for internal flash storage
Model 2145-DH8 includes three 1 Gb Ethernet ports standard for iSCSI connectivity. Model
2145-DH8 can be configured with up to four I/O adapter features that provide up to eight
16 Gb FC ports, up to twelve 8 Gb FC ports, or up to four 10 Gb Ethernet (iSCSI/Fibre
Channel over Ethernet (FCoE)) ports. For more information, see the optional feature section
in the knowledge center:
https://ibm.biz/BdEPQ6
Real-time Compression workloads can benefit from Model 2145-DH8 configurations with two
eight-core processors with 64 GB of memory (total system memory). Compression workloads
can also benefit from the hardware-assisted acceleration that is offered by the addition of up
to two compression accelerator cards. The SVC Storage Engines can be clustered to help
deliver greater performance, bandwidth, and scalability. An SVC clustered system can contain
up to four node pairs or I/O Groups. Model 2145-DH8 storage engines can be added into
existing SVC clustered systems that include previous generation storage engine models.
For more information, see IBM SAN Volume Controller Software Installation and
Configuration Guide, GC27-2286.
For more information about integration into existing clustered systems, compatibility, and
interoperability with installed nodes and uninterruptible power supplies, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1002999
Figure 2-33 shows the front-side view of the SVC 2145-DH8 node.
The actual port speed for each of the ports can be displayed through the GUI, CLI, the node’s
front panel, and by light-emitting diodes (LEDs) that are placed at the rear of the node.
For more information, see SAN Volume Controller Model 2145-DH8 Hardware Installation
Guide, GC27-6490. The PDF is at this website:
https://ibm.biz/BdEzM7
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.
If longwave SFPs are used in the SVC nodes, the longest supported FC link between the
SVC and switch is 40 km (24.85 miles).
Table 2-5 shows the cable length that is supported by shortwave SFPs.
FC speed            OM1                OM2               OM3                OM4
2 Gbps FC           150 m (492.1 ft)   300 m (984.3 ft)  500 m (1640.5 ft)  N/A
4 Gbps FC           70 m (229.7 ft)    150 m (492.1 ft)  380 m (1246.9 ft)  400 m (1312.34 ft)
8 Gbps FC limiting  20 m (68.10 ft)    50 m (164 ft)     150 m (492.1 ft)   190 m (623.36 ft)
16 Gbps FC          15 m (49.21 ft)    35 m (114.82 ft)  100 m (382.08 ft)  125 m (410.10 ft)
Table 2-6 shows the applicable rules that relate to the number of inter-switch link (ISL) hops
that is allowed in a SAN fabric between the SVC nodes or the system.
0 (connect to the same switch); 0 (connect to the same switch); 1 (recommended: 0, connect to the same switch); maximum 3
Support for iSCSI introduces one other IPv4 and one other IPv6 address for each SVC node
port. These IP addresses are independent of the system configuration IP addresses. An IP
address overview is shown in Figure 2-12 on page 37.
If you have SVC with the 10 Gbit features, FCoE support is added with an upgrade to version
6.4. The same 10 Gbit ports are iSCSI and FCoE capable. In terms of transport speed, the FCoE ports compare well with the native Fibre Channel ports (10 Gbit versus 8 Gbit), and recent enhancements to the iSCSI support mean that iSCSI performance levels are similar to Fibre Channel performance levels.
The actual times that are shown are not that important, but a dramatic difference exists
between accessing data that is in cache and data that is on an external disk.
We added a second scale to Figure 2-34, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress in capacity growth, form factor and size reduction, price decrease (cost per GB), and reliability.
However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O did not improve at the same rate, although they certainly did improve. In actual environments, we can expect up to 200 IOPS per disk from today’s enterprise-class FC or serial-attached SCSI (SAS) disks, with an average response time (latency) of approximately 6 ms per I/O.
By comparison, a nearline SAS disk delivers approximately 90 IOPS.
Today’s rotating disks continue to advance in capacity (several TBs), form factor/footprint
(8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and price (cost per
GB), but they are not getting much faster.
Enterprise-class Flash Drives typically deliver 85,000 read and 36,000 write IOPS with typical
latencies of 50 µs for reads and 800 µs for writes. Their form factors of 6.35 cm
(2.5 inches)/8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make them easy to
integrate into existing disk shelves.
Today’s Flash Drive technology is only a first step into the world of high-performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies is collectively referred to as Storage Class Memory (SCM).
For a comprehensive overview of the Flash Drive technology in a subset of the well-known
Storage Networking Industry Association (SNIA) Technical Tutorials, see these websites:
http://www.snia.org/education/tutorials/2010/spring#solid
http://www.snia.org/education/tutorials/fms
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
Internal Flash Drives can be configured in the following two RAID levels:
RAID 1 - RAID 10: In this configuration, one half of the mirror is in each node of the I/O
Group, which provides redundancy if a node failure occurs.
RAID 0: In this configuration, all the drives are assigned to the same node. This
configuration is intended to be used with Volume Mirroring because no redundancy is
provided if a node failure occurs.
The Flash MDisks can then be placed into a single Flash Drive tier storage pool.
High-workload volumes can be manually selected and placed into the pool to gain the
performance benefits of Flash Drives.
For a more effective use of Flash Drives, place the Flash Drive MDisks into a multitiered
storage pool that is combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier
turned on, Easy Tier automatically detects and migrates high-workload extents onto the
solid-state MDisks.
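As a minimal sketch (the MDisk and pool names are illustrative, and the tier name assumes the generic_ssd/generic_hdd tier naming of this code level), the following commands tag a flash MDisk with the SSD tier and add it to an existing HDD pool to form a multitiered pool that Easy Tier can manage:
   svctask chmdisk -tier generic_ssd mdisk7
   svctask addmdisk -mdisk mdisk7 STGPool_Multi
   svcinfo lsmdiskgrp STGPool_Multi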
For more information about IBM Flash Storage, see this website:
http://www.ibm.com/systems/storage/flash/
2.12.1 SAN Volume Controller 7.4 supported hardware list, device driver, and
firmware levels
With the SVC 7.4 release (as in every release), IBM offers functional enhancements and new
hardware that can be integrated into existing or new SVC systems and interoperability
enhancements or new support for servers, SAN switches, and disk subsystems. For the most
current information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
We also review the implications for your storage network and describe performance
considerations.
Important: At the time of writing, the statements we make are correct, but they might
change. Always verify any statements that are made in this book with the SAN Volume
Controller supported hardware list, device driver, firmware, and recommended software
levels that are available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
To achieve the most benefit from the SVC, preinstallation planning must include several
important steps. These steps ensure that the SVC provides the best possible performance,
reliability, and ease of management for your application needs. The correct configuration also
helps minimize downtime by avoiding changes to the SVC and the storage area network
(SAN) environment to meet future growth needs.
Note: For more information, see the Pre-sale Technical and Delivery Assessment (TDA)
document that is available at this website:
https://www.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/salib_SA572/lc=
en_ALL_ZZ
A pre-sale TDA needs to be conducted before a final proposal is submitted to a client and
must be conducted before an order is placed to ensure that the configuration is correct and
the solution that is proposed is valid. The preinstall System Assurance Planning Review
(SAPR) Package includes various files that are used in preparation for an SVC preinstall
TDA. A preinstall TDA needs to be conducted shortly after the order is placed and before
the equipment arrives at the client’s location to ensure that the client’s site is ready for the
delivery and responsibilities are documented regarding the client and IBM or the IBM
Business Partner roles in the implementation.
Tip: For more information about the topics that are described, see the following resources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521,
which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Complete the following tasks when you are planning for the SVC:
Collect and document the number of hosts (application servers) to attach to the SVC, the
traffic profile activity (read or write, sequential, or random), and the performance
requirements, which are I/O per second (IOPS).
Collect and document the following storage requirements and capacities:
– The total back-end storage that is present in the environment to be provisioned on the
SVC
– The total back-end new storage to be provisioned on the SVC
– The required virtual storage capacity that is used as a fully managed virtual disk
(volume) and used as a Space-Efficient (SE) volume
– The required storage capacity for local mirror copy (volume mirroring)
– The required storage capacity for point-in-time copy (FlashCopy)
Note: Check and carefully count the required ports for extended links. Especially in a
stretched cluster environment, you might need many of the higher-cost longwave
gigabit interface converters (GBICs).
Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and bandwidth that is needed between
the host and the SVC.
Determine the SVC service IP address.
Determine the IP addresses for the SVC system and for the host that connects through
iSCSI.
Determine the IP addresses for IP replication.
Define a naming convention for the SVC nodes, host, and storage subsystem.
Define the managed disks (MDisks) in the disk subsystem.
Define the storage pools. The storage pools depend on the disk subsystem that is in place
and the data migration requirements.
Plan the logical configuration of the volume within the I/O Groups and the storage pools to
optimize the I/O load between the hosts and the SVC.
Plan for the physical location of the equipment in the rack.
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high, is included,
and can operate on the following node types only:
SVC 2145-8A4
SVC 2145-8G4
SVC 2145-CF8
SVC 2145-CG8
When the 2145 UPS-1U is configured, the voltage that is supplied to it must be 200 - 240 V,
single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.
2145 DH8
This model includes two integrated AC power supplies and battery units, replacing the
uninterruptible power supply feature that was required on the previous generation storage
engine models.
The functionality of uninterruptible power supply units is provided by internal batteries, which are delivered with each node’s hardware. They ensure sufficient internal power to keep a node operational long enough to dump the non-volatile part of its memory to disk when external power is removed.
After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts
down.
For more information, see IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229, which is available at this website:
http://www.redbooks.ibm.com/abstracts/SG248229.html?Open
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
Figure 3-1 on page 78 shows a power cabling example for the 2145-CG8.
You must follow the guidelines for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model means internal changes. One example is the worldwide port name (WWPN) to physical port mapping. The 2145-8A4, 2145-8G4,
2145-CF8, and 2145-CG8 have the same mapping.
Figure 3-3 on page 80 shows a sample layout in which nodes within each I/O Group are split
between separate racks. This layout protects against power failures and other events that
affect only a single rack.
Each node in an SVC clustered system must have at least one Ethernet connection.
Starting with SVC 6.1, the system management is performed through an embedded GUI that
is running on the nodes. A separate console, such as the traditional SVC Hardware
Management Console (HMC) or IBM System Storage Productivity Center (SSPC), is no
longer required to access the management interface. To access the management GUI, you
direct a web browser to the system management
IP address.
The clustered system first must be created that specifies an IPv4 or an IPv6 system address
for port 1. After the clustered system is created, more IP addresses can be created on port 1
and port 2 until both ports have an IPv4 and an IPv6 address that are defined. This design
allows the system to be managed on separate networks, therefore providing redundancy if a
network fails.
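For example, the following sketch (with illustrative addresses) adds an IPv4 management address on Ethernet port 2 and lists the result; it assumes the chsystemip and lssystemip commands of this code level:
   svctask chsystemip -clusterip 10.20.30.41 -gw 10.20.30.1 -mask 255.255.255.0 -port 2
   svcinfo lssystemip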
Support for iSCSI provides one other IPv4 and one other IPv6 address for each Ethernet port
on every node. These IP addresses are independent of the clustered system configuration
IP addresses.
The SVC Model 2145-CG8 optionally can have a serial-attached SCSI (SAS) adapter with
external ports disabled or a high-speed 10 Gbps Ethernet adapter with two ports. Two more
IPv4 or IPv6 addresses are required in both cases.
When you are accessing the SVC through the GUI or Secure Shell (SSH), choose one of the
available IP addresses to which to connect. No automatic failover capability is available. If one
network is down, use an IP address on the alternative network. Clients might use intelligence
in domain name servers (DNS) to provide partial failover.
The hosts cannot directly see or operate LUNs on the disk subsystems that are assigned to
the SVC system. The SVC nodes within an SVC system must see each other and all of the
storage that is assigned to the SVC system.
The zoning capabilities of the SAN switch are used to create three distinct zones. The SVC
7.4 supports 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps FC fabric, depending on the hardware
platform and on the switch where the SVC is connected. In an environment where you have a
fabric with multiple-speed switches, the preferred practice is to connect the SVC and the disk
subsystem to the switch operating at the highest speed.
All SVC nodes in the SVC clustered system are connected to the same SANs, and they
present volumes to the hosts. These volumes are created from storage pools that are
composed of MDisks that are presented by the disk subsystems. The fabric must have the
following distinct zones:
SVC clustered system zone
Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC
internode communication.
Host zones
Create an SVC host zone for each server that is accessing storage from the SVC system.
Storage zone
Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.
Additionally, isolating remote replication traffic on dedicated ports is beneficial. This isolation
ensures that problems that affect the cluster-to-cluster interconnection do not adversely affect
ports on the primary cluster and therefore affect the performance of workloads running on the
primary cluster.
IBM recommends the following port designations for isolating both port to local and port to
remote node traffic, as shown in Table 3-1 on page 84.
Important: Be careful when you perform the zoning so that inter-node ports are not used
for Host/Storage traffic in the 8-port and 12-port configurations.
This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, or even later migration from 8-port or 12-port
configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered. However, these approaches do not appreciably increase availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.
Consider that although it is true that alternate port mappings that spread traffic across host
bus adapters (HBAs) can allow adapters to come back online following a failure, they will not
prevent a node from going offline temporarily to reboot and attempt to isolate the failed
adapter and then rejoin the cluster. Our recommendation considers these issues with a view
that the greater complexity might lead to migration challenges in the future, and therefore, the
simpler approach is best.
Important: Failure to follow these configuration rules exposes the clustered system to
an unwanted condition that can result in the loss of host access to volumes.
If an intercluster link becomes severely and abruptly overloaded, the local FC fabric can
become congested so that no FC ports on the local SVC nodes can perform local
intracluster heartbeat communication. This situation can, in turn, result in the nodes
experiencing lease expiry events. In a lease expiry event, a node reboots to attempt to
reestablish communication with the other nodes in the clustered system. If the leases
for all nodes expire simultaneously, a loss of host access to volumes can occur during
the reboot events.
Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
Configure zoning to allow all of the nodes in the local fabric to communicate with all of the
nodes in the remote fabric.
Optional: Modify the zoning so that the hosts that are visible to the local clustered system
can recognize the remote clustered system. This capability allows a host to have access to
data in the local and remote clustered systems.
Verify that clustered system A cannot recognize any of the back-end storage that is owned
by clustered system B. A clustered system cannot access logical units (LUs) that a host or
another clustered system can also access.
Figure 3-6 on page 87 shows an example of the SVC, host, and storage subsystem
connections.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
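For example, the following commands show all fabric logins and then only the logins that relate to a particular storage controller (the controller ID is illustrative):
   svcinfo lsfabric -delim :
   svcinfo lsfabric -controller 0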
Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.
The zoning example figures show the SVC node ports, the Storwize family and EMC storage ports, and a Power System host (ports P1 and P2) spread across two redundant fabrics (Fabric IDs 11/12 and 21/22) that are connected by ISLs. In the host zoning information, each zone contains one Power System port and one port per SVC node; for example, zone SVC-Power System Zone P1 contains fabric domain ID and port 21,1 - 11,0 - 11,1, and zone SVC-Power System Zone P2 contains 22,1 - 12,2 - 12,3.
Next, we describe several ways in which you can configure the SVC 6.1 or later.
Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.
Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.
Figure 3-13 on page 92 shows the use of a redundant network and a third subnet for
management.
Figure 3-14 shows the use of a redundant network for iSCSI data and management.
Important: During the individual VLAN configuration for each IP address, if the VLAN
settings for the local and failover ports on two nodes of an I/O Group differ, the switches
must be configured so that failover VLANs are configured on the local switch ports, too.
Then, the failover of IP addresses from the failing node to the surviving node succeeds. If
this configuration is not done, paths are lost to the SVC storage during a node failure.
3.3.4 IP Mirroring
One of the most important new functions of version 7.2 in the Storwize family is IP replication,
which enables the use of lower-cost Ethernet connections for remote mirroring. The capability
is available as a chargeable option (Metro or Global Mirror) on all Storwize family systems.
The new function is transparent to servers and applications in the same way that traditional
FC-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror,
and Global Mirror with changed volumes) are supported.
The configuration of the system is straightforward. Storwize family systems normally can find
each other in the network and can be selected from the GUI. IP replication includes
Bridgeworks SANSlide network optimization technology and is available at no additional
charge. Remote mirror is a chargeable option but the price does not change with IP
replication. Existing remote mirror users have access to the new function at no additional
charge.
IP connections that are used for replication can have a long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many hops between
switches and other appliances in the network. Traditional replication solutions transmit data,
wait for a response, and then transmit more data, which can result in network usage as low as
20% (based on IBM measurements). This situation gets worse as the latency gets longer.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no
separate appliances; therefore, no other costs and configuration are necessary. It uses AI
technology to transmit multiple data streams in parallel and adjusts automatically to changing
network environments and workloads.
Because SANSlide does not use compression, it is independent of application or data type.
Most importantly, SANSlide improves network bandwidth usage up to 3x, so clients might be
able to deploy a less costly network infrastructure or use faster data transfer to speed
replication cycles, improve remote data currency, and recover more quickly.
Note: The limiting factor of the distance is the round-trip time. The maximum supported
round-trip time between sites is 80 milliseconds (ms) for a 1 Gbps link. For a 10 Gbps link,
the maximum supported round-trip time between sites is 10 ms.
Figure 3-15 shows the schematic way to connect two sides through IP mirroring.
Figure 3-16 on page 95 and Figure 3-17 on page 95 show configuration possibilities for
connecting two sites through IP mirroring. Figure 3-16 on page 95 shows the configuration
with single links.
The administrator must configure at least one port on each site to use with the link.
Configuring more than one port means that replication continues, even if a node fails.
Figure 3-17 shows a redundant IP configuration with two links.
As shown in Figure 3-17, the following replication group setup for dual redundant links is
used:
Replication Group 1: Four IP addresses, each on a different node (green)
Replication Group 2: Four IP addresses, each on a different node (orange)
Figure 3-18 shows the configuration of an IP partnership. You can obtain the requirements to
set up an IP partnership at this weblink:
https://ibm.biz/BdEpPB
Preferred practices
The following preferred practices are suggested for IP replication:
Configure two physical links between sites for redundancy.
Configure Ethernet ports that are dedicated for Remote Copy. Do not allow iSCSI host
attach for these Ethernet ports.
Configure remote copy port group IDs on both nodes for each physical link to survive node
failover.
A minimum of four nodes are required for dual redundant links to work across node
failures. If a node failure occurs on a two-node system, one link is lost.
Do not zone two SVC systems to each other over FC or FCoE when an IP partnership exists.
Configure CHAP secret-based authentication, if required.
The maximum supported round-trip time between sites is 80 ms for a 1 Gbps link.
The maximum supported round-trip time between sites is 10 ms for a 10 Gbps link.
For IP partnerships, the recommended method of copying is Global Mirror with changed
volumes because of the performance benefits. Also, Global Mirror and Metro Mirror might
be more susceptible to the loss of synchronization.
The amount of inter-cluster heartbeat traffic is 1 Mbps per link.
The minimum bandwidth requirement for the inter-cluster link is 10 Mbps. However, this
bandwidth scales up with the amount of host I/O that you choose to use.
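A minimal CLI sketch for creating such an IP partnership on one of the systems might look like the following example (the remote cluster IP address, link bandwidth, and background copy rate are illustrative); the corresponding command must also be run on the partner system:
   svctask mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50
   svcinfo lspartnership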
For more information about supported storage subsystems, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the SVC clustered system must be
connected through SAN switches. Direct connection between the SVC and the storage
controller is not supported.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred
practice. Therefore, canister A in the V3700 subsystem can be connected to SAN A only,
or to SAN A and SAN B. Also, canister B in the V3700 subsystem can be connected to
SAN B only, or to SAN B and SAN A.
Split-controller configurations are supported, subject to certain rules and configuration guidelines.
For more information, see 3.3.7, “Stretched cluster system configuration” on page 101.
All SVC nodes in an SVC clustered system must see the same set of ports from each
storage subsystem controller. Violating this guideline causes the paths to become
degraded. This degradation can occur as a result of applying inappropriate zoning and
LUN masking. This guideline has important implications for a disk subsystem, such as
DS3000, V3700, V5000, or V7000, which imposes exclusivity rules regarding to which
HBA WWPNs a storage partition can be mapped.
MDisks within storage pools: SVC 6.1 and later provide for better load distribution
across paths within storage pools.
In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it is possible and even likely that certain paths are more heavily loaded than
others.
This condition is more likely to occur with a smaller number of MDisks in the storage pool.
Starting with SVC 6.1, the code contains logic that considers MDisks within storage pools.
Therefore, the code more effectively distributes their active paths that are based on the
storage controller ports that are available.
The detectmdisk command must be run following the creation or modification (addition or removal of MDisks) of storage pools for the paths to be redistributed.
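For example, after MDisks are added to or removed from a pool, a rescan redistributes the paths; the pool name in the filter is illustrative:
   svctask detectmdisk
   svcinfo lsmdisk -filtervalue mdisk_grp_name=STGPool_DS8K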
In general, configure disk subsystems as though no SVC exists. However, we suggest the
following specific guidelines:
Disk drives:
– Exercise caution with large disk drives so that you do not have too few spindles to
handle the load.
– RAID 5 is suggested for most workloads.
Array sizes:
– An array size of 8+P or 4+P is suggested for the IBM DS4000® and DS5000™
families, if possible.
– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array and rank size. If the array size is
greater than 2 TB and the disk subsystem does not support MDisks that are larger than
2 TB, create the minimum number of LUNs of equal size.
– An array size of 7+P is suggested for the V3700, V5000, and V7000 Storwize families.
– When you are adding more disks to a subsystem, consider adding the new MDisks to
existing storage pools versus creating more small storage pools.
– Auto balancing was introduced in version 7.3 to restripe volume extents evenly across
all MDisks in the storage pools.
– A maximum of 1,024 worldwide node names (WWNNs) are available per cluster:
• EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port.
Each WWNN appears as a separate controller to the SVC.
• IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN
appears as a single controller with multiple ports/worldwide port names (WWPNs),
for a maximum of 16 ports/WWPNs per WWNN.
DS8000 that uses four or eight of the four-port HA cards:
– Use ports 1 and 3 or ports 2 and 4 on each card. (It does not matter for 8 Gb cards.)
This setup provides eight or 16 ports for the SVC use.
– Use eight ports minimum, up to 40 ranks.
– Use 16 ports for 40 or more ranks. Sixteen is the maximum number of ports.
DS4000/DS5000 (EMC CLARiiON/CX):
– Both systems have the preferred controller architecture, and the SVC supports this
configuration.
– Use a minimum of four ports, and preferably eight or more ports, up to a maximum of
16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.
– Support is available for mapping controller A ports to Fabric A and controller B ports to
Fabric B or cross-connecting ports to both fabrics from both controllers. The
cross-connecting approach is preferred to avoid Automatic Volume Transfer
(AVT)/Trespass from occurring if a fabric fails or all paths to a fabric fail.
3.3.6 SAN Volume Controller clustered system configuration
To ensure high availability in SVC installations, consider the following guidelines when you
design a SAN with the SVC:
All nodes in a clustered system must be in the same LAN segment because the nodes in
the clustered system must assume the same clustered system or service IP address.
Ensure that the network configuration allows any of the nodes to use these IP addresses.
If you plan to use the second Ethernet port on each node, two LAN segments can be
used. However, port 1 of every node must be in one LAN segment, and port 2 of every
node must be in the other LAN segment.
To maintain application uptime in the unlikely event of an individual SVC node failing, SVC
nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but it is still a valid
configuration. The remaining node operates in write-through mode, which means that the
data is written directly to the disk subsystem. (The cache is disabled for the write.)
The uninterruptible power supply unit must be in the same rack as the node to which it
provides power, and each uninterruptible power supply unit can have only one connected
node.
The FC SAN connections between the SVC node and the switches are optical fiber. These
connections can run at 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps (DH8), depending on your
SVC and switch hardware. The 2145-CG8, 2145-CF8, 2145-8A4, 2145-8G4, and
2145-DH8 SVC nodes auto-negotiate the connection speed with the switch.
The SVC node ports must be connected to the FC fabric only. Direct connections between
the SVC and the host, or the disk subsystem, are unsupported.
Two SVC clustered systems cannot have access to the same LUNs within a disk
subsystem. Configuring zoning so that two SVC clustered systems have access to the
same LUNs (MDisks) will likely result in data corruption.
The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be in separate racks and separate rooms. For more information, see 3.3.7, “Stretched
cluster system configuration” on page 101.
The SVC uses three MDisks as quorum disks for the clustered system. A preferred
practice for redundancy is to have each quorum disk in a separate storage subsystem,
where possible. The current locations of the quorum disks can be displayed by using the
lsquorum command and relocated by using the chquorum command.
ISL configuration:
– ISLs are located between the SVC nodes.
– Maximum distance is similar to Metro Mirror distances.
– Physical requirements are similar to Metro Mirror requirements.
– ISL distance extension with active and passive WDM devices is supported.
Figure 3-20 shows an example of a split cluster with ISL configuration.
Use the stretched-cluster system configuration with the volume mirroring option to realize an
availability benefit. After volume mirroring is configured, use the
lscontrollerdependentvdisks command to validate that the volume mirrors are on separate
storage controllers. Having the volume mirrors on separate storage controllers ensures that
access to the volumes is maintained if a storage controller is lost.
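For example, the following command (with an illustrative controller name) lists the volumes that would go offline if that controller were lost; an empty list indicates that the volume mirrors are correctly spread across controllers:
   svcinfo lscontrollerdependentvdisks controller0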
When you are implementing a stretched-cluster system configuration, two of the three
quorum disks can be co-located in the same room where the SVC nodes are located.
However, the active quorum disk must be in a separate room. This configuration ensures that
a quorum disk is always available, even after a single-site failure.
For stretched-cluster system configuration, configure the SVC in the following manner:
Site 1: Half of the SVC clustered system nodes and one quorum disk candidate
Site 2: Half of the SVC clustered system nodes and one quorum disk candidate
Site 3: Active quorum disk
For more information about stretched-cluster configurations, see Appendix C, “SAN Volume
Controller stretched cluster” on page 903.
For more information, see IBM SAN Volume Controller Enhanced Stretched Cluster with
VMware, SG24-8211:
http://www.redbooks.ibm.com/abstracts/sg248211.html?Open
MDisks in the SVC are LUNs that are assigned from the underlying disk subsystems to the
SVC and can be managed or unmanaged. A managed MDisk is an MDisk that is assigned to
a storage pool. Consider the following points:
A storage pool is a collection of MDisks. An MDisk can be contained only within a single
storage pool.
An SVC supports up to 128 storage pools.
The number of volumes that can be allocated from a storage pool is unlimited; however, an
I/O Group is limited to 2,048, and the clustered system limit is 8,192.
Volumes are associated with a single storage pool, except in cases where a volume is
being migrated or mirrored between storage pools.
The SVC supports extent sizes of 16 MiB, 32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024
MiB, 2048 MiB, 4096 MiB, and 8192 MiB. Support for extent sizes 4096 MiB and 8192 MiB
was added in SVC 6.1. The extent size is a property of the storage pool and is set when the
storage pool is created. All MDisks in the storage pool have the same extent size, and all
volumes that are allocated from the storage pool have the same extent size. The extent size
of a storage pool cannot be changed. If you want another extent size, the storage pool must
be deleted and a new storage pool configured.
Table 3-2 on page 104 lists all of the available extent sizes in an SVC.
Extent size (MiB)   Maximum clustered system capacity
16                  64 TiB
32                  128 TiB
64                  256 TiB
128                 512 TiB
256                 1 PiB
512                 2 PiB
1024                4 PiB
2048                8 PiB
4096                16 PiB
8192                32 PiB
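Because the extent size cannot be changed later, it is specified when the storage pool is created, as in the following sketch (the pool and MDisk names are illustrative):
   svctask mkmdiskgrp -name STGPool_DS8K -ext 256 -mdisk mdisk0:mdisk1:mdisk2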
The storage pool and SVC cache relationship
The SVC employs cache partitioning to limit the potentially negative effect that a poorly
performing storage controller can have on the clustered system. The partition allocation
size is based on the number of configured storage pools. This design protects against
individual controller overloading and failures from using write cache and degrading the
performance of the other storage pools in the clustered system. For more information, see
2.8.3, “Cache” on page 46.
Table 3-3 shows the limit of the write-cache data.
Table 3-3 Limit of the cache data
Number of storage pools Upper limit
1 100%
2 66%
3 40%
4 30%
5 or more 25%
Consider the rule that no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache starts to limit incoming I/O rates for volumes that are created from the storage
pool. If a particular partition reaches this upper limit, the net result is the same as a global
cache resource that is full. That is, the host writes are serviced on a one-out-one-in basis
because the cache destages writes to the back-end disks.
However, only writes that are targeted at the full partition are limited. All I/O that is
destined for other (non-limited) storage pools continues as normal. The read I/O requests
for the limited partition also continue normally. However, because the SVC is destaging
write data at a rate that is greater than the controller can sustain (otherwise, the partition
does not reach the upper limit), read response times are also likely affected.
The storage pool defines which MDisks that are provided by the disk subsystem make up the
volume. The I/O Group, which is made up of two nodes, defines which SVC nodes provide I/O
access to the volume.
Important: No fixed relationship exists between I/O Groups and storage pools.
Important: Keep a warning level on the used capacity so that it provides adequate
time to respond and provision more physical capacity.
– When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KiB, 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KiB, which is the recommended option. If you select 32 KiB for the
grain size, the volume size cannot exceed 260,000 GiB.
The grain size cannot be changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which
can adversely affect performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KiB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the
FlashCopy function.
– Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a thin-provisioned volume requires
approximately one directory I/O for every user I/O.
– The directory is two-way write-back-cached (as with the SVC fastwrite cache), so
certain applications perform better.
– Thin-provisioned volumes require more processor processing, so the performance per
I/O Group can also be reduced.
– A thin-provisioned volume feature that is called zero detect provides clients with the
ability to reclaim unused allocated disk space (zeros) when they are converting a fully
allocated volume to a thin-provisioned volume by using volume mirroring.
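The following command sketch illustrates the thin-provisioning guidelines above (the pool, I/O Group, and volume names are examples only). It creates a 100 GiB thin-provisioned volume with a 2% real size, automatic expansion, a 256 KiB grain size, and a warning threshold at 80% of the used capacity:
svctask mkvdisk -mdiskgrp ITSO_Pool1 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol01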
Volume mirroring guidelines:
– Create or identify two separate storage pools to allocate space for your mirrored
volume.
– Allocate the storage pools that contain the mirrors from separate storage controllers.
– If possible, use a storage pool with MDisks that share characteristics. Otherwise, the
volume performance can be affected by the poorer performing MDisk.
Multipathing: We suggest the following number of paths per volume (n+1 redundancy):
With two HBA ports, zone each HBA port to two SVC ports (1:2) for a total of four paths.
With four HBA ports, zone each HBA port to one SVC port (1:1) for a total of four paths.
Optional (n+2 redundancy): With four HBA ports, zone each HBA port to two SVC ports (1:2) for a total of eight paths.
We use the term HBA port to describe the SCSI Initiator. We use the term SAN Volume
Controller port to describe the SCSI target.
The maximum number of host paths per volume must not exceed eight.
If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
To configure more than 256 hosts, you must configure the host to I/O Group mappings
on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to
create 1,024 host objects on an eight-node SVC clustered system. Volumes can be
mapped only to a host that is associated with the I/O Group to which the volume belongs.
Port masking
You can use a port mask to control the node target ports that a host can access, which
satisfies the following requirements:
– As part of a security policy to limit the set of WWPNs that can obtain access to any
volumes through an SVC port
– As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and therefore limit the number of host objects that
are configured without resorting to switch zoning
The port mask is an optional parameter of the mkhost and chhost commands. The port
mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all
ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value
is 1111 (all ports enabled).
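For example (the host name and WWPN are hypothetical), the following commands create a host object that can log in only through SVC ports 1 and 2, and later re-enable all four ports:
svctask mkhost -name Win2012_host01 -hbawwpn 210000E08B054CAA -mask 0011
svctask chhost -mask 1111 Win2012_host01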
The SVC supports connection to the Cisco MDS family and Brocade family. For more
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
Layers: SVC 6.3 introduced a new property that is called layer for the clustered system.
This property is used when a copy services partnership exists between an SVC and an
IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered systems are in the replication layer, and this setting cannot be changed. By default, the IBM Storwize V7000 is in the storage layer, which must be changed by using the chsystem CLI command before you can create a copy services partnership between it and the SVC.
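As a sketch, assuming that you run the command on the IBM Storwize V7000 CLI before you create the partnership, the layer can be changed as follows:
svctask chsystem -layer replication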
When you use the SVC Advanced Copy Services, apply the guidelines that are described next.
FlashCopy guidelines
Consider the following FlashCopy guidelines:
Identify each application that must have a FlashCopy function that is implemented for its
volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
The background copy rate value determines the amount of data that is copied per second and the resulting number of grains that are split per second for 256 KiB and 64 KiB grain sizes:
Copy rate value Data copied per second Grains per second (256 KiB grain) Grains per second (64 KiB grain)
11 - 20 256 KiB 1 4
21 - 30 512 KiB 2 8
31 - 40 1 MiB 4 16
41 - 50 2 MiB 8 32
51 - 60 4 MiB 16 64
61 - 70 8 MiB 32 128
71 - 80 16 MiB 64 256
Figure 3-22 Metro Mirror connections
Figure 3-22 contains two redundant fabrics. Part of each fabric exists at the local clustered
system and at the remote clustered system. No direct connection exists between the two
fabrics.
Technologies for extending the distance between two SVC clustered systems can be broadly
divided into the following categories:
FC extenders
SAN multiprotocol routers
Because of the more complex interactions that are involved, IBM explicitly tests products of
this class for interoperability with the SVC. For more information about the current list of
supported SAN routers in the supported hardware list, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946
IBM tested a number of FC extenders and SAN router technologies with the SVC. You must
plan, install, and test FC extenders and SAN router technologies with the SVC so that the
following requirements are met:
The round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global
Mirror, this limit allows a distance between the primary and secondary sites of up to
8000 km (4970.96 miles) by using a planning assumption of 100 km (62.13 miles) per
1 ms of round-trip link latency.
The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link often provides a round-trip latency of 1 ms per
100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies,
which affect the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
the IBM SVC intercluster heartbeat traffic. The amount of traffic depends on how many
nodes are in each of the two clustered systems.
Figure 3-23 on page 112 shows the amount of heartbeat traffic, in megabits per second,
that is generated by various sizes of clustered systems.
These numbers represent the total traffic between the two clustered systems when no I/O
is taking place to mirrored volumes. Half of the data is sent by one clustered system, and
half of the data is sent by the other clustered system. The traffic is divided evenly over all
available intercluster links. Therefore, if you have two redundant links, half of this traffic is
sent over each link during fault-free operation.
The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that was specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of 1 minute or less, plus the required synchronization copy
bandwidth.
With no active synchronization copies and no write I/O disks in Metro Mirror or Global
Mirror relationships, the SVC protocols operate with the bandwidth that is indicated in
Figure 3-23. However, you can determine the true bandwidth that is required for the link
only by considering the peak write bandwidth to volumes that are participating in Metro
Mirror or Global Mirror relationships and adding it to the peak synchronization copy
bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true, even during single failure conditions.
The configuration must be tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including the eventual failback to the primary site from the secondary site.
The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SVC.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the SVC by IBM. Make these
measurements during installation and record the measurements. Testing must be
repeated following any significant changes to the equipment that provides the intercluster
link.
Use a SAN performance monitoring tool, such as IBM Tivoli Storage Productivity Center,
which allows you to continuously monitor the SAN components for error conditions and
performance problems. This tool helps you detect potential issues before they affect your
disaster recovery solution.
The long-distance link between the two clustered systems must be provisioned to allow for
the peak application write workload to the Global Mirror source volumes and the
client-defined level of background copy.
Ideally, the peak application write workload must be determined by analyzing the SVC performance statistics.
Statistics must be gathered over a typical application I/O workload cycle, which might be
days, weeks, or months, depending on the environment on which the SVC is used. These
statistics must be used to find the peak write workload that the link must support.
Characteristics of the link can change with use. For example, latency can increase as the
link is used to carry an increased bandwidth. The user must be aware of the link’s behavior
in such situations and ensure that the link remains within the specified limits. If the
characteristics are not known, testing must be performed to gain confidence of the link’s
suitability.
Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which depends on the technology that is used to implement the link. For
example, when you are transmitting FC traffic over an IP link, you might want to enable
jumbo frames to improve efficiency.
The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
The use of Global Mirror and Metro Mirror between the SVC clustered system and IBM
Storwize systems with a minimum code level of 6.3 is supported.
Support exists for cache-disabled volumes to participate in a Global Mirror relationship;
however, this design is not a preferred practice.
The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for most clients (see the example commands after this list).
During SAN maintenance, the user must choose to reduce the application I/O workload
during maintenance (so that the degraded SAN components can manage the new
workload); disable the gmlinktolerance feature; increase the gmlinktolerance value
(which means that application hosts might see extended response times from Global
Mirror volumes); or stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance that lasts x minutes, it must be reset to the normal value only x minutes after the end of the maintenance activity.
If gmlinktolerance is disabled during maintenance, it must be re-enabled after the
maintenance is complete.
Global Mirror volumes must have their preferred nodes evenly distributed between the
nodes of the clustered systems. Each volume within an I/O Group has a preferred node
property that can be used to balance the I/O load between nodes in that group.
Figure 3-24 on page 114 shows the correct relationship between volumes in a Metro
Mirror or Global Mirror solution.
The capabilities of the storage controllers at the secondary clustered system must be
provisioned to allow for the peak application workload to the Global Mirror volumes, plus
the client-defined level of background copy, plus any other I/O being performed at the
secondary site. Otherwise, the performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system, which reduces the amount of I/O that applications can perform to Global Mirror volumes.
A complete review must be performed before Serial Advanced Technology Attachment
(SATA) for Metro Mirror or Global Mirror secondary volumes is used. The use of a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the SVC cache might not be able to buffer all the writes, and flushing cache
writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is
required of them. You can dedicate storage controllers to only Global Mirror volumes,
configure the controller to ensure sufficient quality of service (QoS) for the disks that are
used by Global Mirror, or ensure that physical disks are not shared between Global Mirror
volumes and other I/O, for example, by not splitting an individual RAID array.
MDisks within a Global Mirror storage pool must be similar in their characteristics, for
example, RAID level, physical disk count, and disk speed. This requirement is true of all
storage pools, but maintaining performance is important when Global Mirror is used.
When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship is in the inconsistent_copying state.
Therefore, the Global Mirror secondary volume is not in a usable state until the copy
completes and the relationship returns to a Consistent state. For this reason, it is highly
advisable to create a FlashCopy of the secondary volume before the relationship is
restarted. When started, the FlashCopy provides a consistent copy of the data, even while
the Global Mirror relationship is copying.
If the Global Mirror relationship does not reach the Synchronized state (for example, if the
intercluster link experiences further persistent I/O errors), the FlashCopy target can be
used at the secondary site for disaster recovery purposes.
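The gmlinktolerance value that is mentioned in the guidelines above is a system-wide setting. As a sketch (the values are examples only), it can be increased for maintenance, disabled, and restored with the chsystem command:
svctask chsystem -gmlinktolerance 600
svctask chsystem -gmlinktolerance 0
svctask chsystem -gmlinktolerance 300
A value of 0 disables the feature, and 300 seconds is the default value.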
If you plan to use a Fibre Channel over IP (FCIP) intercluster link, it is important to design
and size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.
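As a rough illustration of this kind of sizing arithmetic (the workload figures are assumed and are not taken from Example 3-2): if the peak application write rate to the mirrored volumes is 40 MBps, the required background copy rate is 10 MBps, and 10% is added as a safety margin for protocol overhead and heartbeat traffic, the FCIP link must sustain approximately (40 + 10) x 1.1 = 55 MBps, which is roughly 440 Mbps of usable IP bandwidth, in addition to meeting the latency limits that are described previously.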
3.4 Performance considerations
Although storage virtualization with the SVC improves flexibility and provides simpler
management of a storage infrastructure, it can also provide a substantial performance
advantage for various workloads. The SVC caching capability and its ability to stripe volumes
across multiple disk arrays are the reasons why performance improvement is significant when
it is implemented with midrange disk subsystems. This technology is often provided only with
high-end enterprise disk subsystems.
Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool and therefore, have the maximum number of active
spindles at one time. The caching is secondary. The SVC provides additional caching to
the caching that midrange controllers provide (usually a couple of GB). Enterprise systems
have much larger caches.
To ensure that your storage infrastructure delivers the performance and capacity that you want,
undertake a performance and capacity analysis to reveal the business requirements of your
storage environment. When this analysis is done, you can use the guidelines in this chapter to
design a solution that meets the business requirements.
When you are considering performance for a system, always identify the bottleneck and,
therefore, the limiting factor of a specific system. Also consider the workload for which you identify that limiting factor because the component that limits one workload might not be the component that limits other workloads.
When you are designing a storage infrastructure with the SVC or implementing an SVC in an
existing storage infrastructure, you must consider the performance and capacity of the SAN,
disk subsystems, SVC, and the known or expected workload.
3.4.1 SAN
The SVC now has the following models:
2145-8A4
2145-8F4
2145-8G4
2145-CF8
2145-CG8
2145-DH8
All of these models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a
performance point of view, connecting the SVC to 8 Gbps or 16 Gbps switches is better.
Correct zoning on the SAN switch brings together security and performance. Implement a
dual HBA approach at the host to access the SVC.
The SVC is designed to handle large quantities of multiple paths from the back-end storage.
In most cases, the SVC can improve performance, especially on mid-sized to low-end disk
subsystems, older disk subsystems with slow controllers, or uncached disk systems, for the
following reasons:
The SVC can stripe across disk arrays, and it can stripe across the entire set of supported
physical disk resources.
The SVC has a 4 GB, 8 GB, or 24 GB cache (48 GB with the optional processor card, 2145-CG8 only). Model 2145-DH8 has 32 GB of cache.
The SVC can provide automated performance optimization of hot spots by using flash
drives and Easy Tier.
The SVC large cache and advanced cache management algorithms also allow it to improve
on the performance of many types of underlying disk technologies. The SVC capability to
manage (in the background) the destaging operations that are incurred by writes (in addition
to still supporting full data integrity) has the potential to be important in achieving good
database performance.
Depending on the size, age, and technology level of the disk storage system, the total cache
that is available in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in the upper (SVC) or
the lower (disk controller) level of the overall system, the system as a whole can use the
larger amount of cache wherever it is located. Therefore, if the storage control level of the
cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the
SVC cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on the underlying storage technology and the degree to which the workload exhibits
hotspots or sensitivity to cache size or cache algorithms.
For more information about the SVC cache partitioning capability, see IBM SAN Volume
Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SAN Volume Controller
The SVC clustered system is scalable up to eight nodes, and the performance is nearly linear
when more nodes are added into an SVC clustered system until it becomes limited by other
components in the storage infrastructure. Although virtualization with the SVC provides a
great deal of flexibility, it does not diminish the necessity to have a SAN and disk subsystems
that can deliver the performance that you want. Essentially, SVC performance improvements
are gained by having as many MDisks as possible, which creates a greater level of concurrent
I/O to the back end without overloading a single disk or array.
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that must receive disk space from an SVC clustered
system
For more information about performance and preferred practices for the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
For more information about using IBM Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
You have full management control of the SVC regardless of which method you choose. IBM
Tivoli Storage Productivity Center is a robust software product with various functions that
must be purchased separately.
If you have a previously installed SVC cluster in your environment, it is possible that you are using the SVC Console, which is also known as the Hardware Management Console (HMC). You can still use it with IBM Tivoli Storage Productivity Center. When you are using the separately purchased product that is called IBM System Storage Productivity Center (SSPC), which is no longer offered, you can log in to your SVC from only one of them at a time.
If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you
are using the SVC Console or IBM Tivoli Storage Productivity Center server because the
SVC CLI runs on the cluster and is accessed through Secure Shell (SSH), and the SSH client can be installed anywhere.
4.1.1 Network requirements for SAN Volume Controller
To plan your installation, consider the TCP/IP address requirements of the SVC cluster and
the requirements for the SVC cluster to access other services. You must also plan the
address allocation and the Ethernet router, gateway, and firewall configuration to provide the
required access and network security.
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 73.
To assist you in starting an SVC initial configuration, Figure 4-3 on page 124 shows a
common flowchart that covers all of the types of management.
For our initial configuration (configuration 1), we use the following hardware:
2 x 2145-DH8 nodes
1 x 32 GB additional memory for each 2145-DH8 node (64 GB of memory per node in total)
1 x additional processor for each 2145-DH8 node (two processors per node in total)
1 x Real-time Compression (RtC) Accelerator card for each 2145-DH8 node
1 x four-port host bus adapter (HBA) in each 2145-DH8 node
2 x SAN switches (for a redundant SAN fabric)
The back-end storage consists of Storwize V7000 block storage arrays.
The first step is to connect a PC or notebook to the Technician Port on the rear of the SVC
node. See Figure 4-4 for the Technician Port. The Technician Port runs a Dynamic Host Configuration Protocol (DHCP) server that provides an IPv4 address, so you must ensure that your PC or notebook is configured for DHCP. The “default” IP address for a new node is 192.168.0.1.
The 2145-DH8 does not provide IPv6 addresses on the Technician Port.
Nodes: During the initial configuration, you see certificate warnings because the 2145 certificates are self-signed. These warnings are not harmful and can be accepted.
4. This chapter focuses on setting up a new system, so we select As the first node in a new system and click Next.
Important: If you are adding 2145-DH8 nodes to an existing system, ensure that the
existing systems are running code level 7.3 or higher. The 2145-DH8 only supports
code level 7.3 or higher.
5. The next panel prompts you to set an IP address for the cluster. You can choose between
an IPv4 or IPv6 address. In Figure 4-7 on page 127, we set an IPv4 address.
Figure 4-7 Setting the IP address
6. When you click Next, the Create System panel opens, as shown in Figure 4-8.
9. Follow the instructions on Figure 4-10. Disconnect the Ethernet cable from the Technician
Port and your personal computer or notebook. Connect the same personal computer or
notebook to the same network as the system. Click Finish to be redirected to the GUI to
complete the system setup. You can connect to the system IP address from any
management console that is connected to the same network as the system.
10.Whether you are redirected from your personal computer or notebook or connecting to the
Management IP address of the system, the License Agreement panel opens
(Figure 4-11).
11.Read the license agreement and then click the Accept arrow. The login panel opens
(Figure 4-12).
12.You must type the default password for the superuser account, which is passw0rd (with a zero, not the letter o). When you click the Log in arrow, you are prompted to change the password
(Figure 4-13).
13.Type a new password and type it again to confirm it. The password length is 6 - 63
characters. The password cannot begin or end with a space. After you type the password
twice, click the Log in arrow again.
15.Click Next. You can choose to give the cluster a new name. We used ITSO_SVC2, as
shown in Figure 4-15 on page 131.
Figure 4-15 Cluster name
16.Click Apply and Next after you type the name of the cluster.
17.The next step is to set the time and date, as shown in Figure 4-16 on page 132.
18.In this case, we set the date and time manually. At this time, you cannot choose to use the 24-hour clock; however, you can change to the 24-hour clock after you complete the initial configuration. We recommend that you use a Network Time Protocol (NTP) server so that all of your SAN and storage devices have a common time stamp for troubleshooting.
19.Click Apply and Next. The Licensed Functions panel opens, as shown in Figure 4-17 on
page 133.
Figure 4-17 Licensed Functions
20.Enter the total purchased capacity for your system as authorized by your license
agreement. (Figure 4-17 is only an example.) Click Apply and Next. The Configure
System Topology panel opens, as shown in Figure 4-18 on page 134.
21.You can either choose Single site or Multiple sites (which is also known as a stretched
system). If you are installing a stretched system, the panel that is shown in Figure 4-19 on
page 135 opens.
Figure 4-19 Site names
22.Choose the names that you want for each site or keep the defaults. (These names can be
changed after the initial configuration is complete.) Click Apply and Next.
23.The Assign Nodes panel opens (Figure 4-20 on page 136).
24.If the first node is located at the other site and you are on site2, for example, you can move
the node according to the site or site name. Click the icon with opposing arrows, as shown
in Figure 4-21 on page 137.
Figure 4-21 Node site change
25.Node1 is now located in site2. This assignment can be reversed if the change was a mistake or you change your mind. You can use the other radio
button to scan for other node candidates (if you are configuring a system with more than
two nodes).
26.Next, add a node from the drop-down menu, as shown in Figure 4-22 on page 138.
27.Choose the node that complies with the correct panel ID for that site (if you have more
than two nodes in the cluster).
28.Figure 4-23 on page 139 shows that the node was added. The node icon changed from an
outline to an actual node icon.
Figure 4-23 New node added
29.Click Next. The External Storage panel (Figure 4-24 on page 140) opens. You use this
panel to optionally assign external storage to a site. This panel requires that the external
storage is already zoned to the SVC.
30.In the following panels, we show how to assign a storage controller to a specific site. In
Figure 4-25 on page 141, we choose the controller that we want to assign to a specific
site.
Figure 4-25 Modify Site controller0
31.Right-click the controller that you want to assign and select Modify Site. Use the
drop-down menu to select the site (Figure 4-26).
32.Choose the site where the controller is located. After you select the site, click Next.
33.The site name displays in the External Storage panel that is shown in Figure 4-27 on
page 142.
34.Repeat the same steps for all of the controllers and click Next. The external storage site
assignment is complete.
35.Figure 4-28 shows the Email Event Notifications configuration panel.
36.Setting up email event notifications is optional, but we recommend that you set them up.
The next panels show how to set up the email event notifications.
Important: A valid Simple Mail Transfer Protocol (SMTP) server IP address must be
available to complete this step.
37.In the first panel, which is shown in Figure 4-29, set the system location information.
38.Click Next to enter the contact details, as shown in Figure 4-30 on page 144.
Figure 4-31 Email server IP address
41.You can click Ping to verify whether network access exists to the email server (SMTP
server).
42.Enter the local users who will receive notifications when an event occurs (Figure 4-32 on
page 146).
43.The callhome@de.ibm.com email address is a default user that cannot be deleted. You can
add more users to receive email notifications by clicking the plus (+) icon. You can select
the type of notifications to send to the defined users. The following notification types are
available:
– Errors
– Events
– Notifications
– Inventory
44.Click Apply and Next. The Summary panel opens (Figure 4-33 on page 147).
Figure 4-33 Summary
45.Click Finish to complete the initial configuration (configuration 1), which takes you to the
System overview panel. The view depends on whether the initial configuration was
created as a single site or as a stretched system. The following panel shows a single site
system (Figure 4-34).
46.If you configured the system as a stretched system, you see the panel that is shown in
Figure 4-35 on page 148.
Now, the initial configuration is complete. Next, you configure the storage systems, hosts, and
so on.
Figure 4-36 shows the SVC node for the 2145-8G4 and 2145-8A4 models.
Figure 4-37 on page 149 shows the SVC node 2145-CF8 front panel.
Figure 4-37 SVC CF8 front panel
SVC V6.1 and later code levels introduced a new method for performing service tasks. In addition to performing service tasks from the front panel, you can service a node through an Ethernet connection by using a web browser or the CLI. An additional service IP address for each node is required.
4.2.2 Prerequisites
Ensure that the SVC nodes are installed and that Ethernet connectivity and Fibre Channel
(FC) connectivity are configured correctly. For more information about the physical
connectivity to the SVC, see Chapter 3, “Planning and configuration” on page 73.
You perform the first step to create a cluster from the front panel of the SVC. The second step
is performed from a web browser by accessing the management GUI.
Follow these steps to perform the second configuration (configuration 2) of phase one:
1. Choose any node that is a member of the cluster that is being created.
Nodes: After you successfully create and initialize the cluster on the selected node, use
a separate process to add nodes to your cluster.
Important: During these steps, if a timeout occurs while you are entering the input for
the fields, you must begin again from step 2. All of the changes are lost, so ensure that
you have all of the information available before you begin again.
4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6
address, press and release the Up or Down button until New Cluster IPv4? or New
Cluster IPv6? is displayed.
Figure 4-39 shows the various options for the cluster creation.
Figure 4-39 Cluster IPv4? and Cluster IPv6? options on the front panel display
If the New Cluster IPv4? or New Cluster IPv6? action is displayed, move to step 5.
If the New Cluster IPv4? or New Cluster IPv6? action is not displayed, this node is
already a member of a cluster. Complete the following steps:
a. Press and release the Up or Down button until Actions is displayed.
b. Press and release the Select button to return to the Main Options menu.
c. Press and release the Up or Down button until Cluster: is displayed. The name of the
cluster to which the node belongs is displayed on line two of the panel.
In this case, you have two options:
Your first option is to delete this node from the cluster by completing the following
steps:
i. Press and release the Up or Down button until Actions is displayed.
ii. Press and release the Select button.
iii. Press and release the Up or Down button until Remove Cluster? is displayed.
iv. Press and hold the Up button.
v. Press and release the Select button.
vi. Press and release the Up or Down button until Confirm remove? is displayed.
vii. Press and release the Select button.
viii. Release the Up button, which deletes the cluster information from the node.
ix. Return to step 1 on page 150 and start again.
Your second option (if you do not want to remove this node from an existing cluster) is
to review the situation to determine the correct nodes to include in the new cluster.
5. Press and release the Select button to create the cluster.
6. Press and release the Select button again to modify the IP address.
7. Use the Up or Down navigation button to change the value of the first field of the
IP address to the value that was chosen.
8. Use the Right navigation button to move to the next field. Use the Up or Down navigation
button to change the value of this field.
9. Repeat step 8 for each of the remaining fields of the IP address.
10.When the last field of the IP address is changed, press the Select button.
11.Press the Right arrow button:
– For IPv4, IPv4 Subnet: is displayed.
– For IPv6, IPv6 Prefix: is displayed.
12.Press the Select button.
13.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.
14.When the last field of the IPv4 Subnet or IPv6 Prefix is changed, press the Select button.
15.Press the Right navigation button:
– For IPv4, IPv4 Gateway: is displayed.
– For IPv6, IPv6 Gateway: is displayed.
16.Press the Select button.
17.Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address
fields were changed.
18.When the changes to all of the Gateway fields are made, press the Select button.
19.To review the settings before the cluster is created, use the Right and Left buttons. Make
any necessary changes, use the Right and Left buttons to see “Confirm Created?”, and
then press the Select button.
20.After you complete this task, the following information is displayed on the service display
panel:
– Cluster: is displayed on line one.
– A temporary, system-assigned cluster name that is based on the IP address is
displayed on line two.
If the cluster is not created, Create Failed: is displayed on line one of the service display.
Line two contains an error code. For more information about the error codes and to
identify the reason why the cluster creation failed and the corrective action to take, see
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901.
When you create the cluster from the front panel with the correct IP address format, you can
finish the cluster configuration by accessing the management GUI, completing the Create
Cluster wizard, and adding other nodes to the cluster.
Important: At this time, do not repeat this procedure to add other nodes to the cluster.
To add nodes to the cluster, follow the steps that are described in Chapter 9, “SAN Volume
Controller operations using the command-line interface” on page 493 and Chapter 10,
“SAN Volume Controller operations using the GUI” on page 655.
Important: Ensure that the SVC cluster IP address (svcclusterip) can be reached
successfully by using a ping command from the network.
4.3.2 Post-requisites
Perform the following steps to complete the SVC cluster configuration:
1. Configure the SSH keys for the command-line user, as shown in 4.4, “Secure Shell
overview” on page 154.
2. Configure user authentication and authorization.
3. Set up event notifications and inventory reporting.
4. Create the storage pools.
5. Add an MDisk to the storage pool.
6. Identify and create volumes.
7. Create host objects and map the volumes to them.
8. Identify and configure the FlashCopy mappings and Metro Mirror relationship.
9. Back up configuration data.
Tip: If you choose not to create an SSH key pair, you can still access the SVC cluster by
using the SVC CLI, if you have a user password. You are authenticated through the user
name and password.
The connection is secured by using a private key and a public key pair. Securing the
connection includes the following steps:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server (SVC cluster).
3. A private key identifies the client. The private key is checked against the public key during
the connection. The private key must be protected.
4. Also, the SSH server must identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
SSH is the communication vehicle between the management system (the System Storage
Productivity Center or any workstation) and the SVC cluster.
The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administrative and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, an SSH client must be installed on that system, the SSH key pair must be
generated on the client system, and the client’s SSH public key must be stored on the SVC
clusters.
You must preinstall the freeware implementation of SSH2 for Microsoft Windows (which is
called PuTTY) on the System Storage Productivity Center or any other workstation. This
software provides the SSH client function for users who are logged in to the SVC Console
and who want to start the CLI to manage the SVC cluster.
4.4.1 Generating public and private SSH key pairs by using PuTTY
Complete the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. In the PuTTY Key Generator GUI window (Figure 4-40 on page 155), complete the
following steps to generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.
3. Move the cursor onto the blank area to generate the keys.
To generate keys: The blank area is the large blank rectangle on the GUI inside the
section of the GUI labeled Key (Figure 4-40 on page 155). Continue to move the mouse
pointer over the blank area until the progress bar reaches the far right. This action
generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use by completing the following steps:
a. Click Save public key, as shown in Figure 4-41 on page 156.
b. You are prompted for a name, for example, pubkey, and a location for the public key, for
example, C:\Support Utils\PuTTY. Click Save.
If another name and location are chosen, ensure that you maintain a record of the
name and location. You must specify the name and location of this SSH public key in
the steps that are described in 4.4.2, “Uploading the SSH public key to the SAN
Volume Controller cluster” on page 157.
Tip: The PuTTY Key Generator saves the public key with no extension, by default.
Use the string pub in naming the public key, for example, pubkey, to differentiate the
SSH public key from the SSH private key easily.
e. When prompted, enter a name, for example, icat, and a location for the private key, for
example, C:\Support Utils\PuTTY. Click Save.
We suggest that you use the default name icat.ppk because this key was used for icat
application authentication and must have this default name in SVC clusters that are
running on versions before SVC 5.1.
Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster
After you create your SSH key pair, you must upload your SSH public key onto the SVC cluster. Complete the following steps:
1. From your browser, enter https://svcclusteripaddress/.
Alternatively, from the GUI interface, you can go to the Access Management interface and
select Users.
2. In the next window, as shown in Figure 4-43, select Create User to create a user.
You completed the user creation process and uploaded the user’s SSH public key, which is paired later with the user’s private .ppk key, as described in 4.4.3, “Configuring the PuTTY session for the CLI” on page 158. Figure 4-47 on page 161 shows the successful upload of the SSH admin key.
The requirements for the SVC cluster setup by using the SVC cluster web interface are
complete.
Complete the following steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center on a Microsoft Windows desktop, select
Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. From the Category pane on the left in the PuTTY Configuration window (Figure 4-45), click
Session if it is not selected.
Tip: The items that you select in the Category pane affect the content that appears in
the right pane.
3. Under the “Specify the destination you want to connect to” section in the right pane, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if any connection errors occur, they are displayed in the user’s window.
5. In the right pane, for the Preferred SSH protocol version, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. As shown in Figure 4-47, in the “Private key file for authentication:” field under the
Authentication parameters section in the right pane, browse to or enter the fully qualified
directory path and file name of the SSH client private key file (for example, C:\Support
Utils\Putty\icat.PPK) that was created earlier.
You can skip the Connection → SSH → Auth part of the process if you created the user
only with password authentication and no SSH key.
Figure 4-47 PuTTY Configuration: Private key file location for authentication
8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, complete the following steps, as shown in Figure 4-48 on page 162:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and then click Save.
b. For the Host name (or IP address) field, enter the IP address of the SVC cluster.
c. In the Saved Sessions field, enter a name (for example, SVC) to associate with this
session.
d. Click Save again.
You can now close the PuTTY Configuration window or leave it open to continue.
Figure 4-49 Open PuTTY command-line session
4. If this is the first time that you connect to the cluster with the PuTTY application since you generated and uploaded the SSH key pair, a PuTTY Security Alert window opens that warns that the server’s host key is not cached in the registry, as shown in Figure 4-50. Click Yes to cache the host key and continue. The CLI starts.
5. As shown in Example 4-1, the private key that is used in this PuTTY session is now
authenticated against the public key that was uploaded to the SVC cluster.
You completed the required tasks to configure the CLI for SVC administration from the SVC
Console. You can close the PuTTY session.
Note: You must reach the SVC cluster IP address successfully by using the ping command
from the AIX workstation from which cluster access is wanted.
1. OpenSSL must be installed for OpenSSH to work. Complete the following steps to install
OpenSSH on the AIX client:
a. You can obtain the installation images from the following websites:
• https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
• http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully because OpenSSL must be installed before SSH is
used.
2. Complete the following steps to generate an SSH key pair:
a. Run the cd command to browse to the /.ssh directory.
b. Run the ssh-keygen -t rsa command. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)
c. Pressing Enter uses the default file that is shown in parentheses. Otherwise, enter a
file name (for example, aixkey), and then press Enter. The following prompt is
displayed:
Enter a passphrase (empty for no passphrase)
d. When you use the CLI interactively, enter a passphrase because no other
authentication exists when you are connecting through the CLI. After you enter the
passphrase, press Enter. The following prompt is displayed:
Enter same passphrase again:
Enter the passphrase again. Press Enter.
e. A message is displayed indicating that the key pair was created. The private key file
has the name that was entered previously, for example, aixkey. The public key file has
the name that was entered previously with an extension of .pub, for example,
aixkey.pub.
The use of a passphrase: If you are generating an SSH key pair so that you can use
the CLI interactively, use a passphrase so that you must authenticate whenever you
connect to the cluster. You can have a passphrase-protected key for scripted usage, but
you must use the expect command or a similar command to have the passphrase
passed to the ssh command.
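For example (a sketch that assumes a user named admin whose public key was uploaded to the cluster and a private key file named aixkey), you can open an interactive CLI session or run a single command from the AIX host:
ssh -i /.ssh/aixkey admin@svcclusterip
ssh -i /.ssh/aixkey admin@svcclusterip svcinfo lssystem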
Using IPv6: To remotely access the SVC clusters that are running IPv6, you are required
to run a supported web browser and have IPv6 configured on your local workstation.
4.5.1 Migrating a cluster from IPv4 to IPv6
As a prerequisite, enable and configure IPv6 on your local workstation. In our case, we
configured an interface with IPv4 and IPv6 addresses on the System Storage Productivity
Center, as shown in Example 4-2.
3. In the window that is shown in Figure 4-53, complete the following steps:
a. Select Show IPv6.
b. Enter an IPv6 address in the IP Address field.
c. Enter an IPv6 gateway in the Gateway field.
d. Enter an IPv6 prefix in the Subnet Mask/Prefix field. The Prefix field can have a value
of 0 - 127.
e. Click OK.
4. A confirmation window opens, as shown in Figure 4-54 on page 167. Click Apply
Changes.
Figure 4-54 Confirming the changes
5. The Change Management task is started on the server, as shown in Figure 4-55. Click
Close when the task completes.
6. Test the IPv6 connectivity by using the ping command from a cmd.exe session on your local workstation, as shown in Example 4-3 on page 168 (an illustrative command follows these steps).
7. Test the IPv6 connectivity to the cluster by using a compatible IPv6 and SVC web browser
on your local workstation.
8. Remove the IPv4 address in the SVC GUI by accessing the same window that is shown in Figure 4-53 on page 166. Validate this change by clicking OK.
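The following command is an illustrative sketch of the connectivity test in step 6 (the address is a placeholder; substitute the IPv6 address of your cluster):
ping -6 2001:db8:0:1::100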
The ability to consolidate storage for attached open systems hosts provides the following
benefits:
Unified, easier storage management.
Increased utilization rate of the installed storage capacity.
Advanced Copy Services functions offered across storage systems from separate
vendors.
Only one kind of multipath driver needs to be considered for the attached hosts.
Starting with SVC 5.1, IP-based Small Computer System Interface (iSCSI) connectivity was
introduced to provide an alternative method to attach hosts through an Ethernet local area
network (LAN). However, any inter-node communication within the SVC clustered system,
between the SVC and its back-end storage subsystems, and between the SVC clustered
systems solely takes place through FC. For more information about SVC iSCSI connectivity,
see 5.3, “iSCSI” on page 177.
Starting with SVC 6.4, Fibre Channel over Ethernet (FCoE) is supported on models
2145-CG8 and newer. Only 10 GbE lossless Ethernet or faster is supported.
Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached
hosts. Figure 5-1 on page 171 shows the types of attachments that are supported by SVC
release 7.4.
Figure 5-1 SVC host attachment overview
The SVC imposes no particular limit on the actual distance between the SVC nodes and host
servers. Therefore, a server can be attached to an edge switch in a core-edge configuration
and the SVC cluster is at the core of the fabric.
For host attachment, the SVC supports up to three inter-switch link (ISL) hops in the fabric,
which means that the server and the SVC can be separated by up to five FC links, four of which can be up to 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used.
The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 7.4
supports a 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabric, depending on the hardware
platform and on the switch where the SVC is connected. In an environment where you have a
fabric with multiple-speed switches, the preferred practice is to connect the SVC and the disk
storage system to the switch that is operating at the highest speed.
The SVC nodes contain shortwave SFPs; therefore, they must be within 300 m (984.25 feet)
of the switch to which they attach. Therefore, the configuration that is shown in Figure 5-2 on
page 172 is supported.
Table 5-1 shows the fabric type that can be used for communicating between hosts, nodes,
and RAID storage systems. These fabric types can be used at the same time.
In Figure 5-2, the optical distance between SVC Node 1 and Host 2 is slightly over 40 km
(24.85 miles).
To avoid latencies that lead to degraded performance, we suggest that you avoid ISL hops
whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch
as the SVC nodes.
Remember the following limits when you are connecting host servers to an SVC:
Up to 256 hosts per I/O Group are supported, which results in a total of 1,024 hosts per
cluster.
If the same host is connected to multiple I/O Groups of a cluster, it counts as a host in
each of these groups.
A total of 512 distinct, configured host worldwide port names (WWPNs) are supported per
I/O Group.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is
generated for each iSCSI name) that are associated with all of the hosts that are
associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabric is defined by using switch
zoning.
Consider the following rules for zoning hosts with the SVC:
Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar
HBAs in the same host. For example, AIX and Microsoft Windows NT hosts must be in
separate zones, and QLogic and Emulex adapters must also be in separate zones.
Optional (n+2 redundancy): With four HBA ports, zone HBA ports to SVC ports 1:2 for a
total of eight paths.
Here, we use the term HBA port to describe the SCSI initiator and SVC port to describe
the SCSI target.
Important: The maximum number of host paths per LUN must not exceed eight.
The use of this schema provides four paths to one I/O Group for each host and helps to
maintain an equal distribution of host connections on SVC ports.
When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SVC environment, no more than four paths per I/O Group are
required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. If we assume that a
server is connected through four ports to the SVC, each volume is seen through eight paths.
With 125 volumes mapped to this server, the multipath driver must support handling up to
1,000 active paths (8 x 125).
For more configuration and operational information about the IBM Subsystem Device Driver
(SDD), see the Multipath Subsystem Device Driver User’s Guide, S7000303, which is
available at this website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-4. You can combine this schema with the previous four-path
zoning schema.
Additionally, isolating remote replication traffic on dedicated ports is beneficial because it
ensures that problems that affect the cluster-to-cluster interconnection do not adversely affect
the ports on the primary cluster and, therefore, the performance of workloads that are running
on the primary cluster.
We recommend the following port designations for isolating both local node-to-node traffic
and traffic to remote nodes, as shown in Table 5-2 on page 176.
This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, or even later migrations from 8-port or
12-port configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered. However, these approaches do not appreciably increase availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.
Although alternate port mappings that spread traffic across HBAs can allow adapters to come
back online following a failure, they will not prevent a node from going offline temporarily to
reboot and attempt to isolate the failed adapter and then rejoin the cluster. Our
recommendation takes all these considerations into account with a view that the greater
complexity might lead to migration challenges in the future and the simpler approach is best.
5.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP
packets and therefore, uses an existing IP network instead of requiring the FC HBAs and
SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. The iSCSI
connectivity is a software feature that is provided by the SVC code.
The iSCSI-attached hosts can use a single network connection or multiple network
connections.
Restriction: Only hosts can attach to the SVC through iSCSI. The SVC back-end storage
must be attached through the FC SAN.
Each SVC node is equipped with two onboard Ethernet network interface cards (NICs), which
can operate at a link speed of 10 Mbps, 100 Mbps, or 1000 Mbps. Both cards can be used to
carry iSCSI traffic. Each node’s NIC that is numbered 1 is used as the primary SVC cluster
management port. For optimal performance, we advise that you use a 1 Gb Ethernet
connection between the SVC and the iSCSI-attached hosts when the SVC node's onboard
NICs are used.
Starting with the SVC 2145-CG8, an optional 10 Gbps 2-port Ethernet adapter (Feature Code
5700) is available. The required 10 Gbps shortwave SFPs are available as Feature Code
5711. If the 10 GbE option is installed, you cannot install any internal solid-state drives
(SSDs). The 10 GbE option is used solely for iSCSI traffic.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems, for example, AIX, Linux, and
Windows
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.
For more information about the supported operating systems for iSCSI host attachment and
the supported iSCSI HBAs, see the following websites:
IBM SAN Volume Controller v7.4 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of
potentially many instances of iSCSI nodes that are running on that server.
An iSCSI node is identified by its unique iSCSI name and is referred to as an iSCSI qualified
name (IQN). The purpose of this name is to identify the node only, not to address it. In iSCSI,
the name is separated from the addresses. This separation allows multiple iSCSI nodes to
use the same addresses or, as implemented in the SVC, the same iSCSI node to use
multiple addresses.
An iSCSI host in the SVC is defined by specifying its iSCSI initiator names. The following
example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs.
An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.
A host that is accessing SVC volumes through iSCSI connectivity uses one or more Ethernet
adapters or iSCSI HBAs to connect to the Ethernet network.
Both onboard Ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for
host attachment, we advise that you dedicate Ethernet port one for the SVC management
and port two for iSCSI use. This way, port two can be connected to a separate network
segment or virtual LAN (VLAN) for iSCSI because the SVC does not support the use of VLAN
tagging to separate management and iSCSI traffic.
Note: Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, you
can define two IPv4 and two IPv6 addresses or iSCSI network portals.
5.3.4 iSCSI setup of the SAN Volume Controller and host server
You must perform the following procedure when you are setting up a host server for use as an
iSCSI initiator with the SVC volumes. The specific steps vary depending on the particular host
type and operating system that you use.
To set up your host server for use as an iSCSI software-based initiator with the SVC volumes,
complete the following steps. (The CLI is used in this example.)
1. Complete the following steps to set up your SVC cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in
the I/O Groups that use the iSCSI volumes.
b. Configure the node Ethernet ports on each SVC node in the clustered system by
running the cfgportip command.
c. Verify that you configured the node and the clustered system’s Ethernet ports correctly
by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on the SVC clustered system.
e. Use the mkhost command to create a host object on the SVC. It defines the host’s
iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in the SVC.
2. Complete the following steps to set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI
software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server
iSCSI initiator logs in to the SVC clustered system and discovers the SVC volumes.
The host then creates host devices for the volumes.
After the host devices are created, you can use them with your host applications.
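The following commands show a minimal sketch of step 1 on the SVC CLI. The node ID, IP addresses, storage pool, volume name, host name, and IQN are placeholders for illustration only; substitute the values for your environment:
IBM_2145:ITSO_SVC1:admin>svctask cfgportip -node 1 -ip 10.1.1.10 -mask 255.255.255.0 -gw 10.1.1.1 2
IBM_2145:ITSO_SVC1:admin>svcinfo lsportip
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -mdiskgrp MDG_0_DS45 -iogrp 0 -size 10 -unit gb -name iscsi_vol01
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host itsoserver01 iscsi_vol01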
5.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to
enable Challenge Handshake Authentication Protocol (CHAP) authentication,
which involves sharing a CHAP secret between the cluster and the host. If the correct key is
not provided by the host, the SVC does not allow it to perform I/O to volumes. Also, you can
assign a CHAP secret to the cluster.
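As an illustration only (the exact parameters can vary between SVC code levels, so verify them with the CLI help), a CHAP secret might be assigned to a host and to the cluster as follows; the secret values and the host name are placeholders:
IBM_2145:ITSO_SVC1:admin>svctask chhost -chapsecret hostsecret01 itsoserver01
IBM_2145:ITSO_SVC1:admin>svctask chsystem -iscsiauthmethod chap -chapsecret clustersecret01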
A concept that is used for handling the iSCSI IP address failover is called a clustered Ethernet
port. A clustered Ethernet port consists of one physical Ethernet port on each node in the
cluster. The clustered Ethernet port contains configuration settings that are shared by all of
these ports.
Figure 5-6 on page 181 shows an example of an iSCSI target node failover. This example
provides a simplified overview of what happens during a planned or unplanned node restart in
an SVC I/O Group. The example refers to the SVC nodes with no optional 10 GbE iSCSI
adapter installed.
The following numbered comments relate to the numbers in Figure 5-6:
1. During normal operation, one iSCSI target node instance is running on each SVC
node. All of the IP addresses (IPv4/IPv6) that belong to this iSCSI target (including the
management addresses if the node acts as the configuration node) are presented on the
two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network
portal (IPv4/IPv6) IP addresses that are defined on Port1/Port2 and the management
(IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of
the partner node within the I/O Group, node N2. An iSCSI initiator that is running on a
server runs a reconnect to its iSCSI target, that is, the same IP addresses that are
presented now by a new node of the SVC cluster.
3. When the node (N1) finishes its restart, the iSCSI target node (including its IP addresses)
that is running on N2 fails back to N1. Again, the iSCSI initiator that is running on a server
runs a reconnect to its iSCSI target. The management addresses do not fail back. N2
remains in the role of the configuration node for this cluster.
The following commands are new commands that are used for managing iSCSI IP addresses:
The lsportip command lists the iSCSI IP addresses that are assigned for each port on
each node in the cluster.
The cfgportip command assigns an IP address to each node’s Ethernet port for iSCSI
I/O.
The following commands are new commands that are used for managing the cluster IP
addresses:
The lssystemip command returns a list of the cluster management IP addresses that are
configured for each port.
The chsystemip command modifies the IP configuration parameters for the cluster.
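For example, the cluster IP address might be listed and changed as follows; the addresses are placeholders for illustration only:
IBM_2145:ITSO_SVC1:admin>svcinfo lssystemip
IBM_2145:ITSO_SVC1:admin>svctask chsystemip -clusterip 10.1.1.50 -gw 10.1.1.1 -mask 255.255.255.0 -port 1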
The parameters for remote services (SSH and web services) remain associated with the
cluster object. During an SVC code upgrade, the configuration settings for the clustered
system are applied to the node Ethernet port 1.
For iSCSI-based access, the use of redundant network connections and separating iSCSI
traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target
port failure from compromising the host server’s access to the volumes.
Because both onboard Ethernet ports of an SVC node can be configured for iSCSI, we
advise that you dedicate Ethernet port 1 for SVC management and port 2 for iSCSI usage. By
using this approach, port 2 can be connected to a dedicated network segment or VLAN for
iSCSI. Because the SVC does not support the use of VLAN tagging to separate management
and iSCSI traffic, you can assign the correct LAN switch port to a dedicated VLAN to separate
SVC management and iSCSI traffic.
AIX-specific information: In this section, the IBM System p® information applies to all
AIX hosts that are listed on the SVC interoperability support website, including
IBM System i partitions and IBM JS blades.
5. Install the 2145 host attachment support package. For more information, see 5.4.5,
“Installing the 2145 host attachment support package” on page 185.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on the SVC to define the host, volumes, and host
mapping.
8. Run the cfgmgr command to discover and configure the SVC volumes.
Important: It is vital that you regularly check the listed websites for any updates.
For more information and device driver support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
Important: The maximum number of FC ports that are supported in a single host (or
logical partition) is four. These ports can be four single-port adapters or two dual-port
adapters or a combination if the maximum number of ports that attach to the SVC does not
exceed four.
Complete the following steps to configure your host system to use the fast fail and dynamic
tracking attributes:
1. Run the following command to set the FC SCSI I/O Controller Protocol Device to each
adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
This command was for adapter fscsi0. Example 5-1 on page 184 shows the command for
both adapters on a system that is running IBM AIX V6.1.
2. Run the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
This example command was for adapter fscsi0. Example 5-2 shows the command for
both adapters in IBM AIX V6.1.
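If the example listings are not at hand, the equivalent commands for a system with two adapters (fscsi0 and fscsi1) are the following commands. If the devices are in use, the -P flag defers the change until the next restart:
chdev -l fscsi0 -a fc_err_recov=fast_fail
chdev -l fscsi1 -a fc_err_recov=fast_fail
chdev -l fscsi0 -a dyntrk=yes
chdev -l fscsi1 -a dyntrk=yes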
Note: The fast fail and dynamic tracking attributes do not persist through an adapter
delete and reconfigure operation. Therefore, if the adapters are deleted and then
configured back into the system, these attributes are lost and must be reapplied.
You can display the WWPN, with other attributes, including the firmware level, by using the
command that is shown in Example 5-4. The WWPN is represented as the Network Address.
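The listing that follows was produced by the AIX lscfg command; assuming that the adapter is fcs0, the command is:
# lscfg -vl fcs0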
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
Complete the following steps to install the host attachment support package:
1. See the following website:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment for SDDPCM on AIX.
3. Download the appropriate host attachment package archive for your AIX version; the file
set that is contained in the package is devices.fcp.disk.ibm.mpio.rte.
4. Follow the instructions that are provided on the website and the readme files to install the
script.
The AIX MPIO device driver automatically discovers, configures, and makes available all
storage device paths. SDDPCM then manages these paths to provide the following functions:
High availability and load balancing of storage I/O
Automatic path-failover protection
Concurrent download of supported storage devices’ licensed machine code
Prevention of a single-point failure
The AIX MPIO device driver with SDDPCM enhances the data availability and I/O load
balancing of SVC volumes.
SDD: For AIX hosts, use SDDPCM as the multipath software instead of the older SDD.
Although SDD is still supported on earlier AIX releases, a description of SDD is beyond the
scope of this publication. Starting with AIX V6.1, SDD is no longer available for multipathing.
Check the driver readme file to ensure that your AIX system meets all prerequisites.
Example 5-5 shows the appropriate version of SDDPCM that is downloaded into the
/tmp/sddpcm directory. From here, we extract it and run the inutoc command, which
generates a .toc file that is needed by the installp command before SDDPCM is
installed. Finally, we start the installp command, which installs SDDPCM onto this AIX host.
Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM
that is installed.
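A condensed sketch of this installation flow follows. The package file name is a placeholder for the SDDPCM level that you downloaded:
# cd /tmp/sddpcm
# tar -xvf devices.sddpcm.61.rte.tar
# inutoc .
# installp -ac -d . all
# lslpp -l "*sddpcm*"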
For more information about how to enable the SDDPCM web interface, see 5.12, “Using the
SDDDSM, SDDPCM, and SDD web interface” on page 238.
Example 5-7 Status of AIX host system Atlantic
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg
The following command probes the devices sequentially across all installed adapters:
# cfgmgr -vS
The lsdev command lists the three newly configured hdisks that are represented as
MPIO FC 2145 devices, as shown in Example 5-10.
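Assuming default device naming, the listing is produced by the standard AIX disk query:
# lsdev -Cc disk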
Now, you can use the mkvg command to create a VG with the three newly configured hdisks,
as shown in Example 5-11 on page 189.
Example 5-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
The lspv output now shows the new VG label on each of the hdisks that were included in the
VGs, as shown in Example 5-12.
Example 5-13 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6
The pcmpath query device command displays the current state of the devices. Example 5-14
shows the State and Mode of each path for the defined hdisks. All paths show the
optimal status of State=OPEN and Mode=NORMAL. Additionally, an asterisk (*) that is displayed
next to a path indicates an inactive path that is configured to the non-preferred SVC node.
Example 5-14 SDDPCM commands that are used to check the availability of the devices
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg VG is created with hdisk3. A logical volume is created by using the VG. Then,
the testlv1 file system is created and mounted, as shown in Example 5-15.
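A minimal sketch of these steps, assuming a JFS2 file system and a hypothetical mount point of /testfs, is:
# mklv -y testlv1 -t jfs2 itsoaixvg 10
# crfs -v jfs2 -d testlv1 -m /testfs -A yes
# mount /testfs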
The following steps show how to expand a volume on an AIX host when the volume is on the
SVC:
1. Display the current size of the SVC volume by using the SVC CLI command lsvdisk
<VDisk_name>. The capacity of the volume, as seen by the host, is displayed in the
capacity field of the lsvdisk output in GBs.
2. The corresponding AIX hdisk can be identified by matching the vdisk_UID from the
lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity that is configured in AIX by using the lspv hdisk command. The
capacity is shown in the TOTAL PPs field in MBs.
4. To expand the capacity of the SVC volume, use the expandvdisksize command.
5. After the capacity of the volume is expanded, AIX must update its configured capacity. To
start the capacity update on AIX, use the chvg -g vg_name command, where vg_name is
the VG in which the expanded volume is found.
If AIX does not return any messages, the command was successful and the volume
changes in this VG were saved.
If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new capacity that was configured by AIX by using the lspv hdisk command.
The capacity is shown in the TOTAL PPs field in MBs.
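An end-to-end sketch of this procedure follows; the volume name is a placeholder, and the VG and hdisk names are the examples that are used in this chapter:
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk <VDisk_name>
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 1 -unit gb <VDisk_name>
# pcmpath query device
# chvg -g itsoaixvg
# lspv hdisk3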
5.4.11 Running SAN Volume Controller commands from an AIX host system
To run CLI commands, you must install and prepare the SSH client system on the AIX host
system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also
need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for IBM Power
Systems™:
http://ibm.com/systems/power/software/aix/linux/toolbox/download.html
The AIX installation images from IBM developerWorks are available at this website:
http://sourceforge.net/projects/openssh-aix
Important: With Windows 2012, you can use native Microsoft device drivers, but we
strongly advise that you install IBM SDDDSM drivers.
Before you attach the SVC to your host, ensure that all of the following requirements are
fulfilled:
Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
Check the LUN limitations for your host system. Ensure that enough FC adapters are
installed in the server to handle the total number of LUNs that you want to attach.
9. Check the disk timeout on Windows Server, as described in 5.5.5, “Changing the disk
timeout on Windows Server” on page 193.
10.Install and configure SDDDSM.
11.Restart the Windows Server host system.
12.Configure the host, volumes, and host mapping in the SVC.
13.Use Rescan disk in Computer Management of the Windows Server to discover the
volumes that were created on the SVC.
On this page, browse to section V7.4.x, select Supported Hardware, Device Driver,
Firmware and Recommended Software Levels, and then search for Windows.
At this website, you also can find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
Most manufacturers’ driver readme files list the instructions for the Windows registry
parameters that must be set for the HBA driver.
Also, check the documentation that is provided for the server system for the installation
guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.
The detailed configuration settings that you must make for the various vendors’ FC HBAs are
available in the SVC Information Center by selecting Installing → Host attachment → Fibre
Channel host attachments → Hosts running the Microsoft Windows Server operating
system.
On your Windows Server hosts, complete the following steps to change the disk I/O timeout
value to 60 in the Windows registry:
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-7 on page 194.
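Alternatively, as an untested sketch, the same change can be made from an elevated command prompt with the reg utility:
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f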
MPIO is not installed with the Windows operating system, by default. Instead, storage
vendors must pack the MPIO drivers with their own DSMs. IBM SDDDSM is the IBM multipath
I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is
designed specifically to support IBM storage devices on Windows Server 2008 (R2) and
Windows 2012 servers.
The intention of MPIO is to achieve better integration of multipath storage with the operating
system. It also allows the use of multipathing in the SAN infrastructure during the boot
process for SAN boot hosts.
No SDDDSM support exists for Windows Server 2000 because SDDDSM requires the
STORPORT version of the HBA device drivers. Table 5-3 on page 195 lists the SDDDSM
driver levels that are supported at the time of this writing.
Table 5-3 Currently supported SDDDSM driver levels
Windows operating system SDD level
For more information about the levels that are available, see this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
After you download the appropriate archive (.zip file) from this URL, extract it to your local
hard disk and start setup.exe to install SDDDSM. A command prompt window opens, as
shown in Figure 5-8. Confirm the installation by entering Y.
After the setup completes, enter Y again to confirm the reboot request, as shown in
Figure 5-9.
5.5.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to
Windows Server 2012
Create the volumes on the SVC and map them to the Windows Server 2008 R2 or 2012 host.
In this example, we mapped three SVC disks to the Windows Server 2008 R2 host that is
named Diomede, as shown in Example 5-16.
Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window
opens.
3. Select Action → Rescan Disks, as shown in Figure 5-12.
4. The SVC disks now appear in the Disk Management window, as shown in Figure 5-13 on
page 198.
After you assign the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices
in the Device Manager, as shown in Figure 5-14.
5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and then click Subsystem Device Driver DSM, as shown in Figure 5-15 on
page 199. The SDDDSM command-line utility appears.
Figure 5-15 Windows Server 2008 R2 Subsystem Device Driver DSM utility
6. Run the datapath query device command and press Enter. This command displays all of
the disks and the available paths, including their states, as shown in Example 5-17.
Total Devices : 3
C:\Program Files\IBM\SDDDSM>
7. Right-click the disk in Disk Management and then select Online to place the disk online,
as shown in Figure 5-16.
10.Mark all of the disks that you want to initialize and then click OK, as shown in Figure 5-18
on page 201.
Figure 5-18 Windows Server 2008 R2: Initialize Disk
11.Right-click the unallocated disk space and then select New Simple Volume, as shown in
Figure 5-19.
15.Enter a volume label and then click Next, as shown in Figure 5-22.
16.Click Finish. Repeat steps 9 - 16 for every SVC disk on your host system (Figure 5-23 on
page 203).
Figure 5-23 Windows Server 2008 R2: Disk Management
You can expand a volume in the SVC cluster, even if it is mapped to a host. Certain operating
systems, such as Windows Server 2000 and later, can handle a volume being expanded
even if the host has applications running.
Use the updated DiskPart version for Windows Server 2003, which is available from the
Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/
If the volume is part of a Microsoft Cluster Service (MSCS) cluster, Microsoft recommends that you shut
down all but one MSCS cluster node. Also, you must stop the applications in the resource that
access the volume to be expanded before the volume is expanded. Applications that are
running in other resources can continue to run. After the volume is expanded, start the
applications and the resource, and then restart the other nodes in the MSCS.
To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.
DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts.
DiskPart is a command-line interface (CLI) that you can use to manage disks, partitions, and
volumes by using scripts or direct input on the command line. You can list disks and volumes,
select them, and after selecting them, get more detailed information, create partitions, extend
volumes, and so on. For more information about DiskPart, see this website:
http://www.microsoft.com
For more information about expanding the partitions of a cluster-shared disk, see this
website:
http://support.microsoft.com/kb/304736
Next, we show an example of how to expand a volume from the SVC on a Windows Server
2008 host.
To list a volume size, use the svcinfo lsvdisk <VDisk_name> command. This command
provides the volume size information for the Senegal_bas0001 volume before expanding the
volume.
Here, we can see that the capacity is 10 GB, and we can see the value of the vdisk_UID. To
see which vpath this volume uses on the Windows Server 2008 host, we run the SDDDSM
datapath query device command on the Windows host (Figure 5-24).
To see the size of the volume on the Windows host, we use Disk Management, as shown in
Figure 5-24.
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the volume. In this
example, we expand the volume by 1 GB, as shown in Example 5-18 on page 205.
Example 5-18 svctask expandvdisksize command
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
To check that the volume was expanded, we use the svcinfo lsvdisk command. In
Example 5-18, we can see that the Senegal_bas0001 volume capacity was expanded to
11 GB.
After a disk rescan is performed in Windows, you can see the new unallocated space in
Windows Disk Management, as shown in Figure 5-25 on page 206.
This window shows that Disk1 now has 1 GB of new unallocated capacity. To make this capacity
available to the file system, use the following commands, as shown in Example 5-19:
diskpart: Starts DiskPart in a command prompt
list volume: Shows all available volumes
select volume: Selects the volume to expand
detail volume: Displays details for the selected volume, including the unallocated
capacity
extend: Extends the volume into the available unallocated space
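If the complete listing is not available, a minimal DiskPart session follows this pattern; the volume number 2 is a placeholder for the volume that you identified with list volume:
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> detail volume
DISKPART> extend
DISKPART> exit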
* Disk 1 Online 11 GB 1020 MB
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
DISKPART> extend
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
After the volume is extended, the detail volume command shows no free capacity on the
volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-26.
This example uses a Windows basic disk. Dynamic disks can also be expanded by
expanding the underlying SVC volume. The new space appears as unallocated space at
the end of the disk.
Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without
backing up your data. This operation is disruptive for the data because of a change in the
position of the logical block address (LBA) on the disks.
When the host mapping is removed, perform a rescan for the disk. Disk Management on the
server removes the disk, and the vpath goes into the CLOSE state on the server. Verify
these actions by running the SDDDSM datapath query device command; however, the
closed vpath is removed only after the server is rebooted.
In the following examples, we show how to remove an SVC volume from a Windows server.
We show this example on a Windows Server 2008 operating system, but the steps also apply
to Windows Server 2008 R2 and Windows Server 2012.
Figure 5-24 on page 204 shows the Disk Management before removing the disk.
We now remove Disk 1. To find the correct volume information, we find the Serial/UID number
by using SDD, as shown in Example 5-20.
Example 5-20 Removing the SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host
mapping to remove by running the lshostvdiskmap command on the SVC. Then, we remove
the actual host mapping, as shown in Example 5-21.
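A sketch of these two commands follows; the volume name is a placeholder for the volume that corresponds to the Serial/UID that was found in the previous step:
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal
IBM_2145:ITSO_SVC1:admin>svctask rmvdiskhostmap -host Senegal <volume_name>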
SDDDSM also shows us that the status for all paths to Disk1 changed to CLOSE because the
disk is not available, as shown in Example 5-22 on page 211.
Example 5-22 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
The disk (Disk1) is now removed from the server. However, to remove the SDDDSM
information about the disk, you must reboot the server at a convenient time.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. You can download PuTTY from this website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
Cygwin software features an option to install an OpenSSH client. You can download Cygwin
from this website:
http://www.cygwin.com/
In this section, we describe how to install VSS. The following operating system versions are
supported:
Windows Server 2003 with Service Pack (SP) 2 (x86 and x86_64)
Windows Server 2008 with SP2 (x86 and x86_64)
Windows Server 2008 R2 with SP1
Windows Server 2012
IBM System Storage Support for Microsoft VSS (IBM VSS) is installed on the Windows host.
VSS maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of
volumes. These pools are implemented as virtual host systems on the SVC.
5.7.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBM VSS and
Virtual Disk Service software on the Windows operating system:
SVC with FlashCopy enabled
IBM System Storage Support for Microsoft VSS and Virtual Disk Service (VDS) software
During the installation, you are prompted to enter information about the SVC Master Console,
including the location of the truststore file. The truststore file is generated during the
installation of the Master Console. You must copy this file to a location that is accessible to the
IBM System Storage hardware provider on the Windows server.
When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation archive from the following IBM website and extract it to a
directory on the Windows server where you want to install IBM System Storage Support
for VSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log in to the Windows server as an administrator and browse to the directory where the
installation files were downloaded.
3. Run the installation program by double-clicking IBMVSSVDS.exe.
4. The Welcome window opens, as shown in Figure 5-28. Click Next to continue with the
installation.
Figure 5-28 IBM System Storage Support for VSS and VDS installation: Welcome
Figure 5-30 IBM System Storage Support for VSS and VDS installation
7. The next window prompts you to select a Common Information Module (CIM) server,
which is the SVC. In contrast with the older SVC versions, the configuration node now
provides the CIM service on the cluster IP address. Select the correct, automatically
discovered CIM server, or select Enter CIM Server address manually, and then click
Next, as shown in Figure 5-31 on page 215.
Figure 5-31 Select CIM Server
8. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-32):
a. The CIM Server Address field is populated with the URL of the CIM server address that
was chosen in the previous step.
b. In the CIM User field, enter the user name that the IBM VSS software uses to access
the SVC.
c. In the CIM Password field, enter the password for the SVC user name that was
provided in the previous step. Click Next.
9. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system, as shown in Figure 5-33 on page 216.
Additional information: If these settings change after installation, you can use the
ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services
software with the new settings.
If you do not have the CIM Agent server, port, or user information, contact your CIM Agent
administrator.
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware
Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 4.2.1.0816
If you can successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.
When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.
To successfully perform a Volume Shadow Copy Service operation, enough volumes must be
available that are mapped to the free pool. The volumes must be the same size as the source
volumes.
Use the SVC GUI or SVC CLI to complete the following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeros), as shown in Example 5-24.
2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeros), as shown in Example 5-25.
3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be
mapped to any other hosts. If you have volumes that are created for the free pool of
volumes, you must assign the volumes to the free pool.
5. Verify that the volumes were mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs, as shown in Example 5-27.
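The following sketch shows how these virtual hosts and mappings might be created from the CLI. The -force flag and the volume name are assumptions for illustration because the WWPNs are not real, logged-in ports:
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host VSS_FREE <volume_name>
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap VSS_FREE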
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>
Table 5-4 lists the available commands.
ibmvcfg set username <username>: This command sets the user name that is used to access
the SVC Console. Example: ibmvcfg set username Dan
ibmvcfg set password <password>: This command sets the password of the user name that
accesses the SVC Console. Example: ibmvcfg set password mypassword
ibmvcfg set targetSVC <ipaddress>: This command specifies the IP address of the SVC on
which the volumes are located when volumes are moved to and from the free pool with the
ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag
with the ibmvcfg add and ibmvcfg rem commands. Example: set targetSVC 9.43.86.120
ibmvcfg set usingSSL: This command specifies whether to use the Secure Sockets Layer
(SSL) protocol to connect to the SVC Console. Example: ibmvcfg set usingSSL yes
ibmvcfg set cimomPort <portnum>: This command specifies the SVC Console port number.
The default value is 5999. Example: ibmvcfg set cimomPort 5999
ibmvcfg set cimomHost <server name>: This command sets the name of the server where the
SVC Console is installed. Example: ibmvcfg set cimomHost cimomserver
ibmvcfg set namespace <namespace>: This command specifies the namespace value that the
Master Console uses. The default value is \root\ibm. Example: ibmvcfg set namespace \root\ibm
ibmvcfg set vssFreeInitiator <WWPN>: This command specifies the WWPN of the host. The
default value is 5000000000000000. Modify this value only if a host exists in your
environment with a WWPN of 5000000000000000. Example: ibmvcfg set vssFreeInitiator
5000000000000000
ibmvcfg listvols all: This command lists all of the volumes, including information about the
size, location, and host mappings. Example: ibmvcfg listvols all
ibmvcfg listvols free: This command lists the volumes that are in the free pool. Example:
ibmvcfg listvols free
ibmvcfg listvols unassigned: This command lists the volumes that are currently not mapped
to any hosts. Example: ibmvcfg listvols unassigned
ibmvcfg add -s ipaddress: This command adds one or more volumes to the free pool of
volumes. Use the -s parameter to specify the IP address of the SVC where the volumes are
located. The -s parameter overrides the default IP address that is set with the ibmvcfg set
targetSVC command. Examples: ibmvcfg add vdisk12; ibmvcfg add
600507680187000350000000000000BA -s 66.150.210.141
ibmvcfg rem -s ipaddress: This command removes one or more volumes from the free pool of
volumes. Use the -s parameter to specify the IP address of the SVC where the volumes are
located. The -s parameter overrides the default IP address that is set with the ibmvcfg set
targetSVC command. Examples: ibmvcfg rem vdisk12; ibmvcfg rem
600507680187000350000000000000BA -s 66.150.210.141
5.8.1 Configuring the Linux host
Complete the following steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.5.4, “Installing and
configuring the host adapter” on page 193.
3. Install the supported HBA driver or firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Install SDD for Linux, as described in 5.8.5, “Multipathing in Linux” on page 222.
7. Configure the host, volumes, and host mapping in the SVC.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the
SVC.
This website provides the hardware list for supported HBAs and device driver levels for Linux.
Check the supported firmware and driver level for your HBA, and follow the manufacturer’s
instructions to upgrade the firmware and driver levels for each type of HBA.
Often, the automatic update process also upgrades the system to the latest kernel level. Old
hosts that are still running SDD must turn off the automatic update of kernel levels because
certain drivers that are supplied by IBM, such as SDD, depend on a specific kernel and cease
to function on a new kernel. Similarly, HBA drivers must be compiled against specific kernels
to function optimally. By allowing automatic updates of the kernel, you risk affecting your host
systems unexpectedly.
In SLES10, the multipath drivers and tools are installed, by default. However, for RHEL5, the
user must explicitly choose the multipath components during the operating system installation
to install them. Each of the attached SVC LUNs has a special device file in the Linux /dev
directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following website provides the current information about the maximum
configuration for the SVC:
http://www.ibm.com/storage/support/2145
Enable MPIO for RHEL by running the following commands:
modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-29 shows the commands that are run on a Red Hat Enterprise Linux 6.3
operating system.
Note: You can download example multipath.conf files from the following IBM
Subsystem Device Driver for Linux website:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM
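As a rough sketch only (parameter names and recommended values vary by distribution and release, so always start from the IBM-provided example file for your level), a device stanza for the SVC in /etc/multipath.conf might resemble the following lines:
devices {
   device {
      vendor "IBM"
      product "2145"
      path_grouping_policy group_by_prio
      prio alua
      path_checker tur
      failback immediate
      no_path_retry 5
   }
}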
4. Run the multipath -dl command to see the MPIO configuration. You see two groups with
two paths each. All paths must have the state [active][ready], and one group shows
[enabled].
Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now
6. Create a file system by running the mkfs command, as shown in Example 5-33.
7. Create a mount point and mount the drive, as shown in Example 5-34.
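If the example listings are not at hand, the equivalent commands might look like the following sketch; the multipath device name and mount point are placeholders:
[root@palau ~]# mkfs.ext3 /dev/mapper/mpath0
[root@palau ~]# mkdir /svcdisk
[root@palau ~]# mount /dev/mapper/mpath0 /svcdisk
[root@palau ~]# df -h /svcdisk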
5. Configure the host, volumes, and host mapping in the SVC, as described in 5.9.7,
“Attaching VMware to volumes” on page 229.
For more information about supported HBAs for older ESX versions, see this website:
http://ibm.com/storage/support/2145
In most cases, the supported HBA device drivers are included in the ESX server build. However, for
various newer storage adapters, you might be required to load additional ESX drivers. Check the
following VMware hardware compatibility list (HCL) if you must load a custom driver for your
adapter:
http://www.vmware.com/resources/compatibility/search.php
After the HBAs are installed, load the default configuration of your FC HBAs. You must use
the same model of HBA with the same firmware in one server. Configuring Emulex and
QLogic HBAs to access the same target in one server is not supported.
If you are unfamiliar with the VMware environment and the advantages of storing virtual
machines and application data on a SAN, it is useful to get an overview about VMware
products before you continue.
If you run an ESX host with several virtual machines, it makes sense to separate volumes by
I/O characteristics. For example, you can use one slower array for Print and Active Directory
Services guest operating systems that do not generate high I/O, and another, faster array for
database guest operating systems.
The use of more and smaller volumes has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per volume)
Microsoft Cluster Service requires its own volume for each cluster disk resource
For more information about designing your VMware infrastructure, see the following websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059
Guidelines: ESX server hosts that use shared storage for virtual machine failover or load
balancing must be in the same zone. You can have only one VMFS datastore per volume.
To make these changes on your system (Example 5-35), complete the following steps:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters and edit the previously described parameters.
4. Repeat this process for every installed HBA.
5.9.6 Multipathing in ESX
The VMware ESX server performs native multipathing. You do not need to install another
multipathing driver, such as SDD.
Example 5-36 shows that the host Nile is logged in to the SVC with two HBAs.
Then, the SCSI Controller Type must be set in VMware. By default, the ESX server disables
the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time. See Figure 5-34 on page 230.
But, in many configurations, such as configurations for high availability, the virtual machines
must share the VMFS file to share a disk.
Complete the following steps to set the SCSI Controller Type in VMware:
1. Log in to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the following available settings, depending
on your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.
3. Create your volumes on the SVC. Then, map them to the ESX hosts.
Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file
must be visible to every ESX host that can host the virtual machine.
In the SVC, select Allow the virtual disks to be mapped even if they are already
mapped to a host.
The volume must have the same SCSI ID on each ESX host.
For this configuration, we created one volume and mapped it to our ESX host, as shown in
Example 5-37.
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX
server). If you made any changes to your SVC or SAN configuration, complete the following
steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
Figure 5-35 VMware add datastore
Now, the created VMFS datastore appears in the Storage window, as shown in Figure 5-36.
You see the details for the highlighted datastore. Check whether all of the paths are available
and that the Path Selection is set to Round Robin.
If not all of the paths are available, check your SAN and storage configuration. After the
problem is fixed, select Refresh to perform a path rescan. The view is updated to the new
configuration.
The preferred practice is to use the Round Robin Multipath Policy for the SVC. If you need to
edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
Now, your VMFS datastore is created and you can start using it for your guest operating
systems. Round Robin distributes the I/O load across all available paths. If you want to use a
fixed path, the policy setting Fixed also is supported.
For more information about performing this task, see 5.5.5, “Changing the disk timeout on
Windows Server” on page 193.
Note: Before you perform the steps that are described here, back up your data.
Complete the following steps to extend a volume:
1. Expand the volume by running the svctask expandvdisksize -size 1 -unit gb
<VDiskname> command, as shown in Example 5-38.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
11.Select the new free space, and then click Next.
12.Click Next.
13.Click Finish.
The VMFS volume is now extended and the new space is ready for use.
For more information about supported software and driver levels, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
SDD uses a round-robin algorithm when failing over paths. That is, SDD tries the next known
preferred path. If this method fails and all preferred paths are tried, it uses a round-robin
algorithm on the non-preferred paths until it finds a path that is available. If all paths are
unavailable, the volume goes offline. Therefore, it can take time to perform path failover when
multiple paths go offline.
SDD under Solaris performs load balancing across the preferred paths, where appropriate.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA, and SFRAC V4.1/5.0, and Solaris with
Sun Cluster V3.1/3.2 are supported at the time of this writing.
5.11.1 Operating system versions and maintenance levels
At the time of this writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).
SDD is aware of the preferred paths that the SVC sets per volume. SDD uses a round-robin
algorithm when it fails over paths. That is, it tries the next known preferred path. If this method
fails and all preferred paths were tried, it uses a round-robin algorithm on the non-preferred
paths until it finds a path that is available. If all paths are unavailable, the volume goes offline.
Therefore, it can take time to perform path failover when multiple paths go offline.
SDD under HP-UX performs load balancing across the preferred paths where appropriate.
When you are creating a VG, specify the primary path that you want HP-UX to use when it is
accessing the Physical Volume (PV) that is presented by the SVC. Only this path is used to
access the PV if it is available, no matter what the SVC’s preferred path is to that volume.
Therefore, be careful when you are creating VGs so that the primary links to the PVs (and
load) are balanced over both HBAs, FC switches, SVC nodes, and so on.
When you are extending a VG to add alternative paths to the PVs, the order in which you add
these paths is HP-UX’s order of preference if the primary path becomes unavailable.
Therefore, when you are extending a VG, the first alternative path that you add must be from
the same SVC node as the primary path to avoid unnecessary node failover because of an
HBA, FC link, or FC switch failure.
When you are editing your Cluster Configuration ASCII file, ensure that the variable
FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your
cluster to ensure redundancy. For example, when you are configuring a two-node HP cluster,
ensure that FIRST_CLUSTER_LOCK_PV on HP server A is on a separate SVC node and
through a separate FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.
To accommodate this behavior, the SVC supports a “type” that is associated with a host. This
type can be set by using the svctask mkhost command and modified by using the svctask
chhost command. You can set the type to generic, which is the default, or to hpux for HP-UX hosts.
When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the
SVC behaves in the following way:
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 by using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
When any command other than an inquiry is sent to LUN 0 by using Peripheral Device
Addressing, the SVC responds as an unmapped LUN 0 normally responds.
When an inquiry is sent to LUN 0 by using Flat Space Addressing, it is reported as
Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0 or 1Fh
Unknown Device Type.
When an inquiry is sent to an unmapped LUN that is not LUN 0 by using Peripheral Device
Addressing, the Peripheral qualifier that is returned is 001b and the Peripheral Device type
is 1Fh (unknown or no device type). This response is in contrast to the behavior for
generic hosts, where peripheral Device Type 00h is returned.
For more information about the command documentation for the various operating systems,
see the Multipath Subsystem Device Driver User’s Guide, S7000303:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
You can also configure SDDDSM to offer a web interface that provides basic information.
Before the web interface can be used, you must configure it. By default, SDDSRV does not
bind to any TCP/IP port, but it allows port binding to be enabled or disabled dynamically.
For all platforms except Linux, the multipath driver package includes an sddsrv.conf
template file that is named the sample_sddsrv.conf file. On all UNIX platforms except Linux,
the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, the
sample_sddsrv.conf file is in the directory in which SDDDSM was installed.
Copy the sample_sddsrv.conf file to a file that is named sddsrv.conf in the same directory
as the sample_sddsrv.conf file. You can then dynamically change the port binding by
modifying the parameters in the sddsrv.conf file and changing the values of Enableport and
Loopbackbind to True.
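As an illustration only (verify the exact syntax against the sample file at your level), the two relevant entries in sddsrv.conf look similar to the following lines:
enableport = true
loopbackbind = true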
Figure 5-38 shows the start window of the multipath driver web interface.
For more information about SDDDSM configuration, see the IBM System Storage Multipath
Subsystem Device Driver User’s Guide, S7000303, which is available from this website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For more information about host attachment and storage subsystem attachment, and
troubleshooting, see the IBM SAN Volume Controller Knowledge Center at this website:
http://www-01.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
We also introduce and demonstrate the SVC support of the nondisruptive movement of
volumes between SVC I/O Groups, which is referred to as nondisruptive volume move (NDVM)
or multinode volume access.
For more information about the migrateexts command parameters, see the following
resources:
The SVC command-line interface help by entering the following command:
help migrateexts
The IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide,
GC27-2287
When this command is run, a number of extents are migrated from the source MDisk where
the extents of the specified volume are located to a defined target MDisk that must be part of
the same storage pool.
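For example, a hypothetical invocation that migrates 16 extents of volume VDISK1 from mdisk0 to mdisk1 in the same storage pool by using two threads resembles the following command (all names are placeholders):
svctask migrateexts -source mdisk0 -exts 16 -target mdisk1 -threads 2 -vdisk VDISK1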
If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.
In this case, the extents that must be migrated are moved onto the set of MDisks that are not
being deleted. This statement is true if multiple MDisks are being removed from the storage
pool at the same time.
If a volume uses one or more extents that must be moved as a result of running the rmmdisk
command, the virtualization type for that volume is set to striped (if it was previously
sequential or image).
If the MDisk is operating in image mode, the MDisk changes to managed mode while the
extents are being migrated. Upon deletion, it changes to unmanaged mode.
Using the -force flag: If the -force flag is not used and if volumes occupy extents on one
or more of the MDisks that are specified, the command fails.
When the -force flag is used and if volumes occupy extents on one or more of the MDisks
that are specified, all extents on the MDisks are migrated to the other MDisks in the
storage pool if enough free extents exist in the storage pool. The deletion of the MDisks is
postponed until all extents are migrated, which can take time. If insufficient free extents
exist in the storage pool, the command fails.
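As an illustration, the following hypothetical command removes mdisk5 from storage pool Pool1 and forces the migration of any extents that volumes still occupy on it (the pool and MDisk names are placeholders):
svctask rmmdisk -mdisk mdisk5 -force Pool1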
Extents are allocated to the migrating volume from the set of MDisks in the target storage
pool by using the extent allocation algorithm.
The process can be prioritized by specifying the number of threads that are used in parallel
(1 - 4) while migrating; the use of only one thread puts the least background load on the
system.
The offline rules apply to both storage pools. Therefore, as shown in Figure 6-1, if any of the
M4, M5, M6, or M7 MDisks go offline, the V3 volume goes offline. If the M4 MDisk goes
offline, V3 and V5 go offline; however, V1, V2, V4, and V6 remain online.
If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.
During the move, the volume is listed as being a member of the original storage pool. For
configuration purposes, the volume moves to the new storage pool instantaneously at the end
of the migration.
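A migration between storage pools is started with the migratevdisk command. The following sketch, with placeholder names, moves volume VDISK1 into storage pool Pool2 by using four threads; you can monitor the progress with the svcinfo lsmigrate command:
svctask migratevdisk -vdisk VDISK1 -mdiskgrp Pool2 -threads 4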
6.2.4 Migrating the volume to image mode
The facility to migrate a volume to an image mode volume can be combined with the
capability to migrate between storage pools. The source for the migration can be a managed
mode or an image mode volume. This combination of functions leads to the following
possibilities:
Migrate image mode to image mode within a storage pool.
Migrate managed mode to image mode within a storage pool.
Migrate image mode to image mode between storage pools.
Migrate managed mode to image mode between storage pools.
Regardless of the mode in which the volume starts, the volume is reported as being in
managed mode during the migration. Also, both of the MDisks that are involved are reported
as being in image mode during the migration. Upon completion of the command, the volume
is classified as an image mode volume.
NDVM supports access to a single volume by all nodes in the clustered system. This feature
adds the concept of access I/O Groups versus caching I/O Groups. Although a volume can be
accessed through any node of the system, a single I/O Group still controls the I/O caching. This
dynamic balancing of the SVC workload is helpful in situations where the natural growth of
the environment’s I/O demands forces the client and storage administrators to expand
hardware resources. With NDVM, you can instantly rebalance the workload to the volumes to
the new set of SVC nodes (I/O Group) without needing to quiesce or interrupt application
operations and easily lower the high utilization of the original I/O Group.
Before you move the volumes to a new I/O Group on the SVC system, ensure that the
following prerequisites are met:
The host has access to the new I/O Group node ports through SAN zoning.
The host is assigned to the new I/O Group on the SVC system level.
The host operating system and multipathing software support the NDVM feature.
In this example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-2) and confirm the # of I/O Groups column.
2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-3.
3. Now, we move lpar01_vol3 from the existing SVC I/O Group 0 to the new I/O Group 1.
From the left menu pane, select Volumes to see all of the volumes and optionally, filter the
output for the results that you want, as shown in Figure 6-4.
4. Right-click volume lpar01_vol3, and in the menu, select Move Volume to a New I/O
Group.
5. The Move Volume to a New I/O Group wizard window starts (Figure 6-5). Click Next.
Figure 6-5 Move Volume to a New I/O Group wizard: Welcome window
6. Select I/O Group and Node → New Group (and optionally the preferred SVC node) or
leave Automatic for the default node assignment. Click Apply and Next, as shown in
Figure 6-6 on page 248.
You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the svctask movevdisk and svctask addvdiskaccess
commands.
7. The task completion window opens. Next, you need to detect the new paths by the
selected host to switch over the I/O processing to the new I/O Group. Perform the path
detection that is based on the operating system-specific procedures, as shown in
Figure 6-7. Click Apply and Next.
Figure 6-7 Move Volume to a New I/O Group wizard: Detect New Paths window
8. The SVC removes the old I/O Group access to a volume by calling the svctask
rmvdiskaccess CLI command. After the task completes, close the task window.
9. The confirmation with information about the I/O Group move is displayed on the Move
Volume to a New I/O Group wizard window. Proceed to the Summary by clicking Next.
10.Review the summary information and click Finish. The volume is successfully moved to a
new I/O Group without I/O disruption on the host side. To verify that the volume is now being
cached by the new I/O Group, verify the Caching I/O Group column on the Volumes
submenu, as shown in Figure 6-8.
Note: For SVC code version 6.4 and higher, the CLI command svctask chvdisk is not
supported for migrating a volume between I/O Groups. Although svctask chvdisk still
modifies multiple properties of a volume, the new SVC CLI command movevdisk is used for
moving a volume between I/O Groups.
In certain conditions, you might still want to keep the volume accessible through multiple I/O
Groups. This function is possible, but only a single I/O Group can provide the caching of the
I/O to the volume. For modifying the access to a volume for more I/O Groups, use the SVC
CLI commands addvdiskaccess or rmvdiskaccess.
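The equivalent CLI sequence for the move that is shown in this example resembles the following sketch (the volume name and I/O Group IDs are taken from this example):
svctask addvdiskaccess -iogrp 1 lpar01_vol3
svctask movevdisk -iogrp 1 lpar01_vol3
After the host rediscovers its paths to the new I/O Group, remove access through the old I/O Group:
svctask rmvdiskaccess -iogrp 0 lpar01_vol3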
You can use the SVC GUI to modify more I/O Group access by selecting the volume,
right-clicking and selecting Properties → Edit, and then selecting the I/O Groups that you want by
selecting the Accessible I/O Groups property, as shown in Figure 6-9 on page 250.
Important: To change the caching I/O Group for a volume, use the movevdisk command.
To determine the extent allocation of MDisks and volumes, use the following commands:
To list the volume IDs and the corresponding number of extents that the volumes occupy
on the queried MDisk, use the following CLI command:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
To list the MDisk IDs and the corresponding number of extents that the queried volumes
occupy on the listed MDisks, use the following CLI command:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
To list the number of available free extents on an MDisk, use the following CLI command:
svcinfo lsfreeextents <mdiskname | mdisk_id>
Important: After a migration is started, the migration cannot be stopped. The migration
runs to completion unless it is stopped or suspended by an error condition, or if the volume
that is being migrated is deleted.
If you want the ability to start, suspend, or cancel a migration or control the rate of
migration, consider the use of the volume mirroring function or migrating volumes between
storage pools.
6.3 Functional overview of migration
This section describes a functional view of data migration.
6.3.1 Parallelism
You can perform several of the following activities in parallel.
Each system
An SVC system supports up to 32 active concurrent instances of members of the set of the
following migration tasks:
Migrate multiple extents
Migrate between storage pools
Migrate off a deleted MDisk
Migrate to image mode
The following high-level migration tasks operate by scheduling single extent migrations:
Up to 256 single extent migrations can run concurrently. This number is made up of single
extent migrations, which result from the operations previously listed.
The Migrate Multiple Extents and Migrate Between storage pools commands support a
flag with which you can specify the number of parallel “threads” to use (1 - 4). This
parameter affects the number of extents that are concurrently migrated for that migration
operation. Therefore, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation (subject to other resource constraints).
Each MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does
not consider whether the MDisk is the source or the destination. If more than four single
extent migrations are scheduled for a particular MDisk, further migrations are queued,
pending the completion of one of the currently running migrations.
The migration is only suspended if any of the following conditions exist. Otherwise, the
migration is stopped:
The migration occurs between storage pools, and the migration progressed beyond the
first extent.
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves a volume that is spanning storage pools, which is not a valid
configuration other than during a migration.
The migration is a Migrate to Image Mode (even if it is processing the first extent).
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves the volume in an inconsistent state.
A migration is waiting for a metadata checkpoint that failed.
The SVC attempts to resume the migration if the error log entry is marked as fixed by using
the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The
migration might resume on a node other than the node that started the migration.
Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MiB. In this
description, this unit is referred to as a chunk.
During the migration, the extent can be divided into the following regions, as shown in
Figure 6-10 on page 253:
Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the
virtualization layer that is waiting for the chunk to be copied.
Reads to Region A are directed to the destination because this data was copied. Writes to
Region A are written to the source and the destination extent to maintain the integrity of
the source extent.
Reads and writes to Region C are directed to the source because this region is not yet
migrated.
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack, such as cache
destages, are held back. If the back-end storage is operating with significant latency, this
operation might take time (minutes) to complete, which can have an adverse effect on the
overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is
still active after 1 minute, the migration is paused for 30 seconds. During this time, writes to
the chunk can proceed. After 30 seconds, the migration of the chunk is resumed. This
algorithm is repeated as many times as necessary to complete the migration of the chunk, as
shown in Figure 6-10.
Figure 6-10 Migrating an extent
The SVC ensures read stability during data migrations, even if the data migration is stopped
by a node reset or a system shutdown. This read stability is possible because the SVC
disallows writes on all nodes to the area that is being copied. On a failure, the extent
migration is restarted from the beginning. At the conclusion of the operation, we see the
following results:
Extents were migrated in 16 MiB chunks, one chunk at a time.
Chunks are either fully copied, in the process of being copied, or not yet copied.
When the extent is finished, its new location is saved.
Figure 6-11 shows the data migration and write operation relationship.
MDisk modes
The following MDisk modes are available:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and it has no metadata that is
stored on it. The SVC does not write to an MDisk that is in unmanaged mode except when
it attempts to change the mode of the MDisk to one of the other modes.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume
with no virtualization. Image mode volumes have a minimum size of one block (512 bytes)
and always occupy at least one extent. An image mode MDisk is associated with exactly
one volume.
Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage
pool. Zero or more managed mode volumes might use these extents.
Image mode to managed mode
This transition occurs when the image mode volume that is using the MDisk is migrated
into managed mode.
Managed mode to image mode is impossible
No operation is available to take an MDisk directly from managed mode to image mode.
You can achieve this transition by performing operations that convert the MDisk to
unmanaged mode and then to image mode.
The MDisk mode transitions are: an unmanaged MDisk (not in group) becomes a managed mode
MDisk when it is added to a storage pool and returns to unmanaged mode when it is removed from
the pool; an unmanaged MDisk becomes an image mode MDisk when an image mode vdisk is created
on it and returns to unmanaged mode when that vdisk is deleted; and starting a migrate to image
mode operation places the target MDisk in the migrating to image mode state.
Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.
To perform any type of migration activity on an image mode volume, the image mode disk first
must be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case.
After this special migration operation occurs, the volume becomes a managed mode volume
and it is treated in the same way as any other managed mode volume. If the image mode disk
does not have a partial last extent, no special processing is performed. The image mode
volume is changed into a managed mode volume and it is treated in the same way as any
other managed mode volume.
After data is migrated off a partial extent, data cannot be migrated back onto the partial
extent.
Have one storage pool for all of the image mode volumes and other storage pools for the
managed mode volumes, and use the migrate volume facility to move data between them.
Be sure to verify that enough extents are available in the target storage pool.
You can use these methods individually or together to migrate your server’s LUNs from one
storage subsystem to another storage subsystem by using the SVC as your migration tool.
The only downtime that is required for these methods is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
6.5.1 Windows Server 2008 host system connected directly to the DS 3400
In our example configuration, we use a Windows Server 2008 host and a DS 3400 storage
subsystem. The host has two LUNs (drives X and Y). The two LUNs are part of one DS 3400
array. Before the migration, LUN masking is defined in the DS 3400 to give access to the
Windows Server 2008 host system for the volumes from DS 3400 labeled X and Y
(Figure 6-14 on page 258).
Figure 6-14 on page 258 shows the two LUNs (drive X and drive Y).
Figure 6-15 shows the properties of one of the DS 3400 disks that uses the Subsystem
Device Driver DSM (SDDDSM). The disk appears as an FAStT Multi-Path Disk Device.
6.5.2 Adding the SAN Volume Controller between the host system and the
DS 3400
Figure 6-16 shows the new environment with the SVC and a second storage subsystem that
is attached to the SAN. The second storage subsystem is not required to migrate to the SVC.
However, we show in the following examples that it is possible to move data across storage
subsystems without any host downtime.
Figure 6-16 Environment with the SVC: Windows Server 2008 host, SVC I/O Group, and two storage subsystems in the Green, Red, Blue, and Black zones
To add the SVC between the host system and the DS 3400 storage subsystem, complete the
following steps:
1. Check that you installed the supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS 3400. Mask the LUNs to the SVC, and remove the
masking for the host.
Figure 6-17 on page 260 shows the two LUNs (win2008_lun_01 and win2008_lun_02)
with LUN IDs 2 and 3 that are remapped to the SVC Host ITSO_SVC_DH8.
Important: To avoid potential data loss, back up all the data that is stored on your
external storage before you use the wizard.
5. Log in to your SVC Console and open Pools → System Migration, as shown in
Figure 6-18.
6. Click Start New Migration, which starts a wizard, as shown in Figure 6-19.
7. Follow the Storage Migration Wizard, as shown in Figure 6-20 on page 261, and then click
Next.
Figure 6-20 Storage Migration Wizard (Step 1 of 8)
Figure 6-21 Storage Migration Wizard: Preparing the environment for migration (Step 2 of 8)
9. Click Next to complete the storage mapping, as shown in Figure 6-22.
12.Mark both MDisks (mdisk10 and mdisk12) for migrating, as shown in Figure 6-25, and
then click Next.
13.Figure 6-26 shows the MDisk import process. During the import process, a storage pool is
automatically created, in our case, MigrationPool_8192. You can see that the command
that is issued by the wizard creates an image mode volume with a one-to-one mapping to
mdisk10 and mdisk12. Click Close to continue.
14.To create a host object to which we map the volume later, click Add Host, as shown in
Figure 6-27.
16.Enter the host name that you want to use for the host, add the Fibre Channel (FC) port,
and select a host type. In our case, the host name is Win_2008. Click Add Host, as shown
in Figure 6-29 on page 267.
Figure 6-29 Storage Migration Wizard: Completed host information
Figure 6-32 Storage Migration Wizard: Volumes that are available for mapping (Step 6 of 8)
20.Mark both volumes and click Map to Host, as shown in Figure 6-33.
21.Modify the host mapping by choosing a host by using the drop-down menu, as shown in
Figure 6-34. Click Next.
22.The right side of Figure 6-35 on page 270 shows the volumes that can be marked to map
to your host. Mark both volumes and click Apply.
23.Figure 6-36 shows the progress of the volume mapping to the host. Click Close when you
are finished.
24.After the volume to host mapping task is completed, the Host Mappings column for the
host shows Yes (Figure 6-37 on page 271). Click Next.
Figure 6-37 Storage Migration Wizard: Map Volumes to Hosts
25.Select the storage pool that you want to use for migration, in our case, DS3400_pool1, as
shown in Figure 6-38. Click Next.
Figure 6-38 Storage Migration Wizard: Selecting a storage pool to use for migration (Step 7 of 8)
27.The window that is shown in Figure 6-40 opens. This window states that the migration has
begun. Click Finish.
28.The window that is shown in Figure 6-41 opens automatically to show the progress of the
migration.
29.Click Volumes → Volumes by host, as shown in Figure 6-42, to see all the volumes that
are served by the new host for this migration step.
30.Figure 6-43 shows all the volumes (copy0* and copy1) that are served by the newly
created host.
As you can see in Figure 6-43, the migrated volume is a mirrored volume with one copy on
the image mode pool and another copy in a managed mode storage pool. The administrator
can choose to leave the volume or split the initial copy from the mirror.
6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 Server host, complete the
following steps:
1. Start the Windows Server 2008 host system again, go to Disk Management of the
DS 3400 disks and see the new disk properties that changed to a 2145 Multi-Path Disk
Device, as shown in Figure 6-44 on page 274.
Figure 6-46 Subsystem Device Driver DSM CLI
3. Run the datapath query device command to check whether all paths are available as
planned in your SAN environment (Example 6-1).
Total Devices : 2
C:\Program Files\IBM\SDDDSM>
First, we add a new empty storage pool (in our case imagepool) for the import of the LUNs, as
shown in Example 6-3. It is better to have a separate pool in case a problem occurs during
the import. That way, the import process cannot affect the other storage pools.
ty:compression_uncompressed_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:chi
ld_mdisk_grp_count:child_mdisk_grp_capacity:type:encrypt
0:CompressedV7000:online:3:0:90.00GB:1024:90.00GB:0.00MB:0.00MB:0.00MB:0:80:auto:b
alanced:no:0.00MB:0.00MB:0.00MB:0:CompressedV7000:0:0.00MB:parent:no
1:test_pool_01:online:3:0:381.00GB:1024:381.00GB:0.00MB:0.00MB:0.00MB:0:80:off:ina
ctive:no:0.00MB:0.00MB:0.00MB:1:test_pool_01:0:0.00MB:parent:no
2:MigrationPool_8192:online:2:2:30.00GB:8192:0:30.00GB:30.00GB:30.00GB:100:0:auto:
balanced:no:0.00MB:0.00MB:0.00MB:2:MigrationPool_8192:0:0.00MB:parent:no
3:DS3400_pool1:online:1:1:100.00GB:1024:80.00GB:20.00GB:20.00GB:20.00GB:20:80:auto
:balanced:no:0.00MB:0.00MB:0.00MB:3:DS3400_pool1:0:0.00MB:parent:no
4:imagepool:online:0:0:0:256:0:0.00MB:0.00MB:0.00MB:0:0:off:inactive:no:0.00MB:0.0
0MB:0.00MB:4:imagepool:0:0.00MB:parent:
IBM_2145:ITSO_SVC2:ITSO_admin>
6.5.5 Migrating a volume from managed mode to image mode
Complete the following steps to migrate a managed volume to an image mode volume:
1. Create an empty storage pool for each volume that you want to migrate to image mode.
These storage pools host the target MDisk that you map later to your server at the end of
the migration.
2. Click Pools → MDisks by Pools to create a pool from the drop-down menu, as shown in
Figure 6-47.
3. To create an empty storage pool for migration, complete the following steps:
a. You are prompted for the pool name, extent size, and warning threshold, as shown in
Figure 6-48. After you enter the information, click Next.
b. You are then prompted to optionally select the MDisk to include in the storage pool, as
shown in Figure 6-49 on page 280. Click Create.
4. As shown in Figure 6-50, you are reminded that an empty storage pool was created. Click
Yes.
5. Figure 6-51 on page 281 shows the progress status as the system creates a storage pool
for migration. Click Close to continue.
Figure 6-51 Progress status
6. From the Create Volumes panel, select the volume that you want to migrate to image
mode and select Export to Image Mode from the drop-down menu, as shown in
Figure 6-52.
7. Select the MDisk onto which you want to migrate the volume, as shown in Figure 6-53 on
page 282. Click Next.
8. Select a storage pool into which the image mode volume is placed after the migration
completes, in our case, the For Migration storage pool. Click Finish, as shown in
Figure 6-54.
9. The volume is exported to image mode and placed in the For Migration storage pool, as
shown in Figure 6-55. Click Close.
10.Browse to Pools → MDisk by Pools. Click the plus sign (+) (expand icon) to the left of the
name. Now, mdisk12 is an image mode MDisk, as shown in Figure 6-56.
11.Repeat these steps for every volume that you want to migrate to an image mode volume.
12.Delete the image mode data from the SVC by using the procedure that is described in
6.5.7, “Removing image mode data from the IBM SAN Volume Controller” on page 291.
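The Export to Image Mode action that is performed through the GUI in the previous steps can also be started from the CLI with the migratetoimage command. A minimal sketch with placeholder names follows; the target MDisk must be unmanaged and the target storage pool must already exist:
svctask migratetoimage -vdisk VDISK1 -mdisk mdisk_target -mdiskgrp Pool_Image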
To migrate the image mode volume to another image mode volume, complete the following
steps:
1. Mark the unmanaged mdisk15 and click Actions or right-click and select Import from the
list, as shown in Figure 6-58.
2. The Import Wizard window opens, which describes the process of importing the MDisk
and mapping an image mode volume to it, as shown in Figure 6-59. Enable the caching
and click Next.
3. Select a temporary pool because you do not want to migrate the volume into an SVC
managed volume pool. Select the extent size from the drop-down menu and click Finish,
as shown in Figure 6-60 on page 285.
Figure 6-60 Import Wizard (Step 2 of 2)
4. The import process starts (as shown in Figure 6-61) by creating a temporary storage pool
MigrationPool_1024 (1 GiB) and an image volume. Click Close to continue.
Figure 6-61 Import of MDisk and creation of temporary storage pool MigrationPool_1024
5. As shown in Figure 6-62, an image mode mdisk15 now shows with the import controller
name and SCSI ID as its name.
6. Create a storage pool Migration_Out with the same extent size (1 GiB) as the
automatically created storage pool MigrationPool_1024 for transferring the image mode
disk. Go to Pools → MDisks by Pools, as shown in Figure 6-63 on page 286.
7. Click Create Pool to create an empty storage pool and give your new storage pool the
meaningful name Migration_Out. Click the Advanced Settings drop-down menu. Choose
1.00 GiB as the extent size for your new storage pool, as shown in Figure 6-64. Click Next
to continue.
Figure 6-64 Creating an empty storage pool with a 1 GiB extent size (Step 1 of 2)
8. Figure 6-65 on page 287 shows a storage pool window with several MDisks. Without
selecting an MDisk, click Create to continue to create an empty storage pool.
Figure 6-65 Creating an empty storage pool (Step 2 of 2)
9. The warning that is shown in Figure 6-66 reminds you that an empty storage pool is
created. Click Yes to continue.
10.Figure 6-67 on page 288 shows the progress of creating the storage pool Migration_Out.
Click Close to continue.
11.Now, the empty storage pool for the image to image migration is created. Go to Pools →
MDisks by Pools, as shown Figure 6-68.
13.In the left pane, select the storage pool of the imported disk, which is called
MigrationPool_1024. Then, mark the image disk that you want to migrate out and select
Actions. From the drop-down menu, select Export to Image Mode, as shown in
Figure 6-70.
14.Select the target MDisk mdisk13 on the new disk controller to which you want to migrate.
Click Next, as shown in Figure 6-71.
15.Select the target Migration_Out (empty) storage pool, as shown in Figure 6-72 on
page 290. Click Finish.
16.Figure 6-73 shows the progress status of the Export Volume to Image process. Click
Close to continue.
17.Figure 6-74 on page 291 shows that the MDisk location changed as expected to the new
storage pool Migration_Out.
Figure 6-74 Image disk migrated to new storage pool
18.Repeat these steps for all image mode volumes that you want to migrate.
19.If you want to delete the data from the SVC, use the procedure that is described in 6.5.7,
“Removing image mode data from the IBM SAN Volume Controller” on page 291.
6.5.7 Removing image mode data from the IBM SAN Volume Controller
If your data is in an image mode volume inside the SVC, you can remove the volume from the
SVC, which allows you to free the original LUN for reuse. The following sections describe how
to migrate data to an image mode volume. Depending on your environment, you might need
to complete the following procedures before you delete the image volume:
6.5.5, “Migrating a volume from managed mode to image mode” on page 279
6.5.6, “Migrating the volume from image mode to image mode” on page 283
To remove the image mode volume from the SVC, we use the delete vdisk command.
If the command succeeds on an image mode volume, the underlying back-end storage
controller is consistent with the data that a host might previously read from the image mode
volume. That is, all fast write data was flushed to the underlying LUN. Deleting an image
mode volume causes the MDisk that is associated with the volume to be ejected from the
storage pool. The mode of the MDisk is returned to unmanaged.
Image mode volumes only: This situation applies to image mode volumes only. If you
delete a normal volume, all of the data is also deleted.
As shown in Example 6-1 on page 275, the SAN disks are on the SVC.
Check that you installed the supported device drivers on your host system.
3. Check your Host and select your volume. Then, right-click and select Unmap all Hosts,
as shown in Figure 6-76.
4. Verify your unmap process, as shown in Figure 6-77, and click Unmap.
5. Repeat steps 3 - 5 for every image mode volume that you want to remove from the SVC.
6. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN
masking, and add the host to the masking.
7. Power on your host system.
6.5.8 Mapping the free disks onto the Windows Server 2008 server
To detect and map the disks that were freed from SVC management, go to Windows Server
2008 and complete the following steps:
1. Using your DS 3400 Storage Manager interface, remap the two LUNs that were MDisks
back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-78 shows that the LUNs are now back to an
IBM 1726-4xx FAStT Multi-Path Disk Device type.
3. Open your Disk Management window; the disks appear, as shown in Figure 6-79 on
page 294. You might need to reactivate each disk by right-clicking it.
This example can help you to perform any of the following tasks in your environment:
Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC.
Perform this task first when you are introducing the SVC into your environment. This
section shows that your host downtime is only a few minutes while you remap and remask
disks by using your storage subsystem LUN management tool. For more information
about this task, see 6.6.2, “Preparing your IBM SAN Volume Controller to virtualize disks”
on page 297.
Move data between storage subsystems while your Linux server is still running and
servicing your business application.
Perform this task if you are removing a storage subsystem from your SAN environment.
You also can perform this task if you want to move the data onto LUNs that are more
appropriate for the type of data that is stored on those LUNs, taking availability,
performance, and redundancy into consideration. For more information about this task,
see 6.6.4, “Migrating the image mode volumes to managed MDisks” on page 304.
Move your Linux server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the Linux server.
For more information about this step, see 6.6.5, “Preparing to migrate from the IBM SAN
Volume Controller” on page 307.
You can use these three activities individually or together to migrate your Linux server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not use all three tasks, you can introduce or remove the SVC from
your environment.
The only downtime that is required for these tasks is the time that it takes to remask and
remap the LUNs between the storage subsystems and your SVC.
Figure 6-80 shows our Linux server that is connected to our SAN infrastructure. The following
LUNs are masked directly to our Linux server from our storage subsystem:
The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1). This LUN is used to boot the system directly from the storage subsystem. The
operating system identifies this LUN as /dev/mapper/VolGroup00-LogVol00.
SCSI LUN ID 0: To successfully boot a host off the SAN, you must assign the LUN as
SCSI LUN ID 0.
Example 6-11 on page 296 shows our disks that attach directly to the Linux hosts.
Our Linux server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-80 on page 295. The
Linux server has the following configuration:
The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green
Zone with our storage subsystem.
The two LUNs that were defined on the storage subsystem by using LUN masking are
directly available to our Linux server.
6.6.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the steps to introduce the SVC into your SAN environment. Although
this section summarizes these activities only, you can introduce the SVC into your SAN
environment without any downtime to any host or application that also uses your SAN.
If an SVC is already connected, skip to 6.6.2, “Preparing your IBM SAN Volume Controller to
virtualize disks” on page 297.
Complete the following steps to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply units, and
redundant ac-power switches). Cable the SVC correctly, power on the SVC, and verify that
the SVC is visible on your SAN. For more information, see Chapter 3, “Planning and
configuration” on page 73.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone, which is our Black Zone that is described in Figure 6-81 on
page 297
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)
Figure 6-81 shows our environment.
We must create an empty storage pool for each of the disks by using the commands that are
shown in Example 6-12.
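A minimal sketch of those commands resembles the following example; the pool names match the pools that are used later in this section, and the extent size of 256 MB is an assumption for illustration:
svctask mkmdiskgrp -name Palau_Pool1 -ext 256
svctask mkmdiskgrp -name Palau_Pool2 -ext 256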
The use of the svcinfo lshbaportcandidate command on the SVC lists all of the worldwide
names (WWNs), which are not yet allocated to a host, that the SVC can see on the SAN
fabric. Example 6-13 shows the output of the nodes that it found on our SAN fabric. (If the
port did not show up, a zone configuration problem exists.)
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:ITSO_admin>
If you do not know the WWN of your Linux server, you can review which WWNs are currently
configured on your storage subsystem for this host. Figure 6-82 shows our configured ports
on an IBM DS4700 storage subsystem.
After it is verified that the SVC can see our host (Palau), we create the host entry and assign
the WWN to this entry. Example 6-14 shows these commands.
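The commands resemble the following sketch, in which we assume, purely for illustration, that the first WWN in the candidate list belongs to the Palau server:
svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD
svcinfo lshost Palau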
You can rename the storage subsystem to a more meaningful name by using the svctask
chcontroller -name command. If you have multiple storage subsystems that connect to your
SAN fabric, renaming the storage subsystems makes it considerably easier to identify them.
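For example, a hypothetical rename of the controller with ID 0 resembles the following command:
svctask chcontroller -name DS4700 0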
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in Figure 6-83 on page 300 (which shows the disk serial number SAN_Boot_palau)
and Figure 6-84 on page 300.
Before we move the LUNs to the SVC, we must configure the host multipath configuration for
the SVC. Add the following entry to your multipath.conf file, as shown in Example 6-16, and
then add the content of Example 6-17 to the file.
# SVC
device {
vendor "IBM"
product "2145DH8"
path_grouping_policy group_by_serial
}
We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.
If we want to move only the LUN that holds our application and data files, we do not have to
reboot the host. The only requirement is that we unmount the file system and vary off the
volume group (VG) to ensure data integrity during the reassignment.
Because we intend to move both LUNs at the same time, we must complete the following
steps:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps instead:
a. Stop the applications that use the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are a Logical Volume Manager (LVM) volume, deactivate that VG by
using the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver by using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel. (We reload this module
and rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan
for new disks without requiring you to unload the HBA driver; however, we do not
provide those details here.
LUN IDs: Although we are using boot from SAN, you can also map the boot disk with
any LUN to the SVC. The LUN does not have to be 0 until later when we configure the
mapping in the SVC to the host.
4. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-18 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (unique identifier (UID) on the
svcinfo lsmdisk task display) with the serial number that you recorded earlier (in
Figure 6-83 on page 300 and Figure 6-84 on page 300).
5. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-19.
6. We create our image mode volumes by using the svctask mkvdisk command and the
-vtype image option, as shown in Example 6-20 on page 303. This command virtualizes
the disks in the same layout as though they were not virtualized.
Example 6-20 Create the image mode volumes
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0
-vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0
-vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count
29 palau_SANB 0 io_grp0 online 4
Palau_Pool1 12.0GB image
60050768018301BF280000000000002B 0 1 empty
0
30 palau_Data 0 io_grp0 online 4
Palau_Pool2 5.0GB image
60050768018301BF280000000000002C 0 1 empty
0
7. Map the new image mode volumes to the host, as shown in Example 6-21.
Important: Ensure that you map the boot volume with SCSI ID 0 to your host. The host
must identify the boot volume during the boot process.
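A sketch of the mapping commands follows; the boot volume is mapped with SCSI ID 0, and the data volume is mapped with SCSI ID 1 (the SCSI ID of the data volume is an assumption for illustration):
svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data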
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.
8. Power on your host server and enter your FC HBA BIOS before booting the operating
system. Ensure that you change the boot configuration so that it points to the SVC.
Complete the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the IBM SAN Volume Controller
2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you moved only the application LUN to the SVC and left your Linux server running, you
must complete only these steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can run commands to the kernel to rescan the
SCSI bus to see the new volumes. (These details are beyond the scope of this book.)
b. Check your syslog to verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is stored in the /var/log/messages directory.
c. If your application and data are on an LVM volume, rediscover the VG and then run the
vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-22. The df output shows us that all of the disks are available again.
Preparing MDisks for striped mode volumes
From our second storage subsystem, we performed the following tasks:
Created and allocated three new LUNs to the SVC
Discovered them as MDisks
Renamed these LUNs to more meaningful names
Created a storage pool
Placed all of these MDisks into this storage pool
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-24. Listing the storage pool by using the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pools is slowly increasing while those extents
are moved to the new storage pool.
After this task completes, the volumes are now spread over three MDisks, as shown in
Example 6-25.
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:ITSO_admin>
Our migration to striped volumes on another storage subsystem (DS4500) is now complete.
The original MDisks (palau-md1, palau-md2, and palau-md3) can now be removed from the
SVC, and these LUNs can be removed from the storage subsystem.
If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove the DS4700 storage subsystem from our SAN fabric.
You might want to perform this task for any one of the following reasons:
You purchased a new storage subsystem and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
Changes to your environment no longer require this host to use the SVC.
We can perform other preparation tasks before we must shut down the host and reconfigure
the LUN masking and mapping. We describe these tasks in this section.
If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-85 on
page 308.
Figure 6-85 Environment with the SVC: Linux host, SVC I/O Group, and two storage subsystems in the Green, Red, Blue, and Black zones
We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it is removed from the SVC.
It is assumed that you created the necessary zones, and after your zone configuration is set
up correctly, the SVC sees the new storage subsystem controller by using the svcinfo
lscontroller command, as shown in Example 6-26.
It is also a good idea to rename the new storage subsystem’s controller to a more useful
name, which can be done by using the svctask chcontroller -name command, as shown in
Example 6-27.
Also, verify that the controller name was changed as you wanted, as shown in Example 6-28.
Creating LUNs
We created two LUNs and masked the LUNs on our storage subsystem so that the SVC can
see them. Eventually, we give these two LUNs directly to the host and remove the volumes
that the host currently uses. To check that the SVC can use these two LUNs, run the svctask
detectmdisk command, as shown in Example 6-29.
Even though the MDisks do not stay in the SVC for long, we suggest that you rename them to
more meaningful names so that they are not confused with other MDisks that are used by
other activities.
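A hypothetical rename resembles the following command (the MDisk ID and the new name are placeholders):
svctask chmdisk -name mdpalau_ivd mdisk32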
Our SVC environment is now ready for the volume migration to image mode volumes.
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>
During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.
After the migration completes, the image mode volumes are ready to be removed from the
Linux server. Also, the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
6.6.7 Removing the LUNs from the IBM SAN Volume Controller
The next step requires downtime on the Linux server because we remap and remask the
disks so that the host sees them directly through the Green Zone, as shown in Figure 6-85 on
page 308.
Our Linux server has two LUNs: one LUN is our boot disk and holds operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we want to move only the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the VG to ensure data integrity during the reassignment.
Before you start: Moving LUNs to another storage subsystem might need another entry in
the multipath.conf file. Check with the storage subsystem vendor to identify any content
that you must add to the file. You might be able to install and modify the file in advance.
Complete the following steps to move both LUNs at the same time:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step
makes them unmanaged, as shown in Example 6-33.
Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that no outstanding dirty cached data exists for the volume that is being removed. If
cached data is still uncommitted, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data needs to be destaged and how busy the
I/O subsystem is determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the
command svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute.
This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but some of the data was lost.
32 mdpalau_ivd online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>
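A sketch of the removal and of the cache check, which uses the volume names from this example, resembles the following commands:
svcinfo lsvdisk palau_SANB
svctask rmvdisk palau_SANB
svctask rmvdisk palau_Data
In the detailed lsvdisk output, confirm that the fast_write_state attribute shows empty before you remove each volume.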
5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the Linux server.
Important: If one of the disks is used to boot your Linux server, you must ensure that
the disk is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds
that disk during its initialization.
6. Power on your host server and enter your FC HBA BIOS before you boot the OS. Ensure
that you change the boot configuration so that it points to the SVC. In our example, we
performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS
b. Opened Configuration Settings
c. Opened Selectable Boot Settings
d. Changed the entry from the SVC to the storage subsystem LUN with SCSI ID 0
e. Exited the menu and saved the changes
Important: This step is the last step that you can perform and still safely back out from
the changes so far.
Up to this point, you can reverse all of the changes that you performed so far and get the
server back online without data loss by performing the following actions:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Re-create the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
We then manage those LUNs with the SVC, move them between other managed disks, and
finally move them back to image mode disks so that those LUNs can then be masked and
mapped back to the VMware ESX server directly.
This example can help you perform any one of the following tasks in your environment:
Move your ESX server’s data LUNs (that are your VMware VMFS file systems where you
might have your VMs stored), which are directly accessed from a storage subsystem, to
virtualized disks under the control of the SVC.
Move LUNs between storage subsystems while your VMware VMs are still running.
You can perform this task to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. For more information, see 6.7.4, “Migrating the image mode volumes” on
page 323.
Move your VMware ESX server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the server.
This task starts in 6.7.5, “Preparing to migrate from the IBM SAN Volume Controller” on
page 326.
You can use these tasks individually or together to migrate your VMware ESX server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not use all three of these tasks, you can introduce the SVC in your
environment or move the data between your storage subsystems.
The only downtime that is required for these tasks is the time that it takes you to remask and
remap the LUNs between the storage subsystems and your SVC.
Our starting SAN environment is shown in Figure 6-86.
Figure 6-86 shows our ESX server that is connected to the SAN infrastructure. Two LUNs are
masked directly to it from our storage subsystem.
Our ESX server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-86.
The ESX server’s HBA cards are zoned so that they are in the Green Zone with our storage
subsystem.
The two LUNs that were defined on the storage subsystem and that use LUN masking are
directly available to our ESX server.
6.7.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the process that is used to introduce the SVC into your SAN
environment. Although we summarize only the steps that are in the process, you can
introduce the SVC into your SAN environment without any downtime to any host or
application that also uses your SAN.
If an SVC is already connected, skip to the instructions that are given in 6.7.2, “Preparing your
IBM SAN Volume Controller to virtualize disks” on page 316.
Complete the following steps to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and redundant
ac-power switches). Cable the SVC correctly and power on the SVC. Verify that the SVC is
visible on your SAN.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone (the Black Zone as shown in our diagram on Example 6-57 on
page 337)
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)
Creating a storage pool
When we move the two ESX LUNs to the SVC, they first are used in image mode; therefore,
we need a storage pool to hold those disks.
We create an empty storage pool for these disks by using the command that is shown in
Example 6-35. Our MDG_Nile_VM storage pool holds the boot LUN and our data LUN.
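A command of the following form creates such an empty storage pool; the 512 MB extent size is an assumption for illustration:
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512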
First, we get the WWN for our ESX server’s HBA because many hosts are connected to our
SAN fabric and in the Blue Zone. We want to ensure that we have the correct WWN to reduce
our ESX server’s downtime.
Log in to your VMware Management Console as root, browse to Configuration, and select
Storage Adapters. The storage adapters are shown on the right side of the window that is
shown in Figure 6-88. This window displays all of the necessary information. Figure 6-88
shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.
Figure 6-88 Obtain your WWN by using the VMware Management Console
Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs that are
not yet allocated to a host and that the SVC can see on the SAN fabric. Example 6-36 on
page 318 shows the output of the host WWNs that it found on our SAN fabric. (If the port is
not shown, a zone configuration problem exists.)
After we verify that the SVC can see our host, we create the host entry and assign the WWN
to this entry, as shown in Example 6-37.
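Under the assumption that the host object is named Nile, the host entry takes a form similar to the following sketch, which uses the WWPNs from Figure 6-88:
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD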
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-89 and
Figure 6-90 show our serial numbers. Figure 6-89 shows disk serial number VM_W2k3.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.
The VMs are on these LUNs. Therefore, to move these LUNs under the control of the SVC,
we do not need to reboot the entire ESX server. However, we must stop and suspend all
VMware guests that are using these LUNs.
2. Identify all of the VMware guests that are using this LUN and shut them down. One way to
identify them is to highlight the VM and open the Summary tab. The datapool that is used
is displayed under Datastore. Figure 6-93 on page 321 shows a Linux VM that is using the
datastore that is named SLES_Costa_Rica.
Figure 6-93 Identify the LUNs that are used by the VMs
3. If you have several ESX hosts, also check the other ESX hosts to ensure that no guest
operating system is running and using this datastore.
4. Repeat steps 1 - 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem
management tool) to unmap and unmask the disks from the ESX server and to remap and
remask the disks to the SVC.
6. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-39 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.
7. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-40.
8. We create our image mode volumes by using the svctask mkvdisk command
(Example 6-41). The use of the -vtype image parameter ensures that it creates image
mode volumes, which means that the virtualized disks have the same layout as though
they were not virtualized.
9. We can map the new image mode volumes to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping, as shown in Example 6-42. (A consolidated CLI sketch of steps 6 - 9 follows this procedure.)
10.By using the VMware Management Console, rescan to discover the new volume. Open
the Configuration tab, select Storage Adapters, and then click Rescan. During the
rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with the new vmhba devices.
11.We are ready to restart the VMware guests again.
At this point, you migrated the VMware LUNs successfully to the SVC.
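The SVC-side commands from steps 6 - 9 take a form similar to the following sketch. The MDisk number mdiskN, the host object name Nile, the SCSI ID 0, and the MDisk name ESX_W2k3_MD are assumptions for illustration; MDG_Nile_VM and ESX_W2k3_IVD are the pool and volume names that are used in this chapter:
IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name ESX_W2k3_MD mdiskN
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3_MD -name ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_W2k3_IVD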
(Diagram: the SVC I/O group io_grp0 and both IBM or OEM storage subsystems on the SAN, with the Green, Red, Blue, and Black Zones)
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-44. Listing the storage pool with the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing as those extents are
moved to the new storage pool.
If you compare the svcinfo lsmdiskgrp output after the migration (as shown in
Example 6-45), you can see that all of the virtual capacity was moved from the old storage
pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column
shows that the capacity is now spread over three MDisks.
The migration to the SVC is complete. You can remove the original MDisks from the SVC and
remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it
from our SAN fabric.
You might want to perform this process for any one of the following reasons:
You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
Changes to your environment no longer require this host to use the SVC.
We can perform other preparatory activities before we shut down the host and reconfigure the
LUN masking and mapping. This section describes those activities. In our example, we move
volumes that are on a DS4500 to image mode volumes that are on a DS4700.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in “Adding a storage
subsystem to the IBM SAN Volume Controller” on page 323 and “Make fabric zone changes”
on page 323.
Creating LUNs
On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see them. These two LUNs are eventually given directly to the host, replacing the volumes that it currently uses. To check that the SVC can use them, run the svctask detectmdisk command, as
shown in Example 6-46.
Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are being used by
other activities. We also create the storage pools to hold our new MDisks, as shown in
Example 6-47.
Our SVC environment is ready for the volume migration to image mode volumes.
During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and use the VMs that are running on
the server.
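The migration to image mode is performed with the svctask migratetoimage command, one invocation per volume; the target MDisk and storage pool names in this sketch are placeholders for the objects that were created in Example 6-46 and Example 6-47:
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk <new_image_mdisk0> -mdiskgrp <new_image_pool>
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk <new_image_mdisk1> -mdiskgrp <new_image_pool>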
You can check the migration status by using the svcinfo lsmigrate command, as shown in
Example 6-49.
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>
After the migration completes, the image mode volumes are ready to be removed from the
ESX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
6.7.7 Removing the LUNs from the IBM SAN Volume Controller
Your ESX server’s configuration determines in what order your LUNs are removed from the
control of the SVC, and whether you must reboot the ESX server and suspend the VMware
guests.
In our example, we moved the VM disks. Therefore, to remove these LUNs from the control of
the SVC, we must stop and suspend all of the VMware guests that are using this LUN.
Complete the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo
lshostvdiskmap command, as shown in Example 6-50. Compare the volume UID and sort
out the information.
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count
0 vdisk_A 0 io_grp0 online
2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online
4 MDG_ESX_VD 70.0GB striped
60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online
4 MDG_ESX_VD 60.0GB striped
60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:ITSO_admin>
4. Remove the volumes from the SVC by using the svctask rmvdisk command, which
makes the MDisks unmanaged, as shown in Example 6-52.
Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that there is no outstanding dirty cached data for the volume that is being removed. If
uncommitted cached data still exists, the command fails with the following error
message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last write activity for the volume. The time that this command takes to complete depends on how much data must be destaged and how busy the I/O subsystem is.
You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but the data was lost.
5. By using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the ESX server. Remember that we recorded the SCSI LUN IDs in Example 6-50 on page 329. To map your LUNs on the storage subsystem, use the same SCSI LUN IDs that you used in the SVC.
Important: This step is the last step that you can perform and still safely back out of
any changes made so far.
Up to this point, you can reverse all of the following actions that you performed to get
the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Re-create the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
6. By using the VMware Management Console, rescan to discover the new volume.
Figure 6-95 shows the view before the rescan. Figure 6-96 on page 332 shows the view
after the rescan. The size of the LUN changed because we moved to another LUN on
another storage subsystem.
During the rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with a new vmhba address and VMware recognizes it
as our VMWARE-GUESTS disk.
We are now ready to restart the VMware guests.
7. To ensure that the MDisks are removed from the SVC, run the svctask detectmdisk
command. The MDisks are discovered as offline and then automatically removed when
the SVC determines that no volumes are associated with these MDisks.
We manage those LUNs with the SVC, move them between other managed disks, and then
move them back to image mode disks so that those LUNs can then be masked and mapped
back to the AIX server directly.
By using this example, you can perform any of the following tasks in your environment:
Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC, which is the first task that you perform when you are introducing
the SVC into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks by using your storage subsystem LUN management tool. This step starts in
6.8.2, “Preparing your IBM SAN Volume Controller to virtualize disks” on page 335.
Move data between storage subsystems while your AIX server is still running and
servicing your business application.
You can perform this task if you are removing a storage subsystem from your SAN
environment and you want to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. This step is described in 6.8.4, “Migrating image mode volumes to volumes”
on page 342.
Move your AIX server’s LUNs back to image mode volumes so that they can be remapped
and remasked directly back to the AIX server.
This step starts in 6.8.5, “Preparing to migrate from the IBM SAN Volume Controller” on
page 344.
Use these tasks individually or together to migrate your AIX server’s LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not use all three tasks, you can use a subset of them to introduce the SVC into your environment or to remove it.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
(Figure 6-97 diagram: the AIX host attached through the SAN Green Zone to an IBM or OEM storage subsystem)
Figure 6-97 also shows that our AIX server is connected to our SAN infrastructure. It has two
LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.
The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the
itsoaixvg1 LVM group, as shown in Example 6-53 on page 334.
Our AIX server represents a typical SAN environment with a host that directly uses LUNs that were created on a SAN storage subsystem, as shown in Figure 6-97 on page 333.
The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) Zone with
our storage subsystem.
The two LUNs, hdisk3 and hdisk4, were defined on the storage subsystem. By using LUN
masking, they are directly available to our AIX server.
6.8.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the steps to take to introduce the SVC into your SAN environment.
Although this section summarizes only these activities, you can accomplish this task without
any downtime to any host or application that also uses your SAN.
If an SVC is already connected, skip to 6.8.2, “Preparing your IBM SAN Volume Controller to
virtualize disks” on page 335.
Important: Be careful when you are connecting the SVC to your SAN because this action
requires you to connect cables to your SAN switches and alter your switch zone
configuration. Performing these tasks incorrectly can render your SAN inoperable, so
ensure that you fully understand the effect of your actions.
Complete the following tasks to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and redundant
ac-power switches), cable the SVC correctly, power on the SVC, and verify that the SVC is
visible on your SAN.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone (our Black Zone, as shown in Example 6-66 on page 344)
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)
(Diagram: zoning for the migration scenarios, showing the AIX host, the SVC I/O group io_grp0, and the IBM or OEM storage subsystems connected through the Green, Red, Blue, and Black Zones)
7 aix_imgmdg online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
IBM_2145:ITSO-CLS2:ITSO_admin>
First, we get the WWN for our AIX server’s HBA because we have many hosts that are
connected to our SAN fabric and in the Blue Zone. We want to ensure that we have the
correct WWN to reduce our AIX server’s downtime. Example 6-55 shows the commands to
get the WWN; our host has a WWN of 10000000C932A7FB.
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
##
The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that are not yet allocated to a host and that the SVC can see on the SAN fabric. Example 6-56 shows the output of the host WWNs that it found on our SAN fabric. (If the port is not shown, a zone configuration problem exists.)
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:ITSO_admin>
After we verify that the SVC can see our host (Kanaga), we create the host entry and assign
the WWN to this entry, as shown with the commands in Example 6-57.
Names: The svctask chcontroller command enables you to change the discovered
storage subsystem name in the SVC. In complex SANs, we suggest that you rename your
storage subsystem to a more meaningful name.
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-99 on
page 339 shows disk serial number kanage_lun0.
Figure 6-99 Obtaining disk serial number kanage_lun0
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.
Because we want to move only the LUNs that hold our application and data files, we can move them without rebooting the host. The only requirement is that we unmount the file systems and vary off the VGs to ensure data integrity after the reassignment.
Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver (SDD) is installed on the AIX server. You can install the SDD in advance; however, installing it might require an outage of your host.
Complete the following steps to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Complete the following steps to unmount and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.
Example 6-59 shows the commands that were run on Kanaga. (A consolidated sketch of the host-side and SVC-side commands also follows this procedure.)
3. By using Storage Manager (our storage subsystem management tool), the disks can be
unmapped and unmasked from the AIX server and remapped and remasked as disks of
the SVC.
4. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-60 shows the commands that were used to discover our
MDisks and to verify that the correct MDisks are available.
25 mdisk25 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>
Important: Match your discovered MDisk serial numbers (the UID in the svcinfo lsmdisk command output) with the serial numbers that you discovered earlier, as shown in Figure 6-99 on page 339 and Figure 6-100 on page 339.
5. After you verify that the correct MDisks are available, rename them to avoid confusion in
the future when you perform other MDisk-related tasks, as shown in Example 6-61.
6. Create the image mode volumes by using the svctask mkvdisk command and the option
-vtype image, as shown in Example 6-62. This command virtualizes the disks in the same
layout as though they were not virtualized.
7. Map the new image mode volumes to the host, as shown in Example 6-63.
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.
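As a consolidated sketch of this procedure, the host-side quiesce (step 2) and the SVC-side volume creation and mapping (steps 6 and 7) look similar to the following commands. The mount points, the volume names AIX_IVD0 and AIX_IVD1, and the host object name Kanaga are assumptions; the VG, MDisk, and storage pool names are the ones that are used in this section:
#umount /mnt/itsoaixvg
#umount /mnt/itsoaixvg1
#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name AIX_IVD0
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name AIX_IVD1
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdiskhostmap -host Kanaga AIX_IVD0
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdiskhostmap -host Kanaga AIX_IVD1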
IBM_2145:ITSO-CLS2:ITSO_admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>
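A sketch of starting the migration of each image mode volume into the striped aix_vd pool with the svctask migratevdisk command follows; the volume names are the hypothetical ones that were used in the earlier sketch:
IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratevdisk -vdisk AIX_IVD0 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratevdisk -vdisk AIX_IVD1 -mdiskgrp aix_vd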
While the migration is running, our AIX server is still running and we can continue accessing
the files.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-65. Listing the storage pool by using the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing while those extents
are moved to the new storage pool.
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:ITSO_admin>
Our migration to the SVC is complete. You can remove the original MDisks from the SVC and
you can remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove the storage subsystem from our SAN fabric.
You can perform this task for one of the following reasons:
You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
Changes to your environment no longer require this host to use the SVC.
Other preparatory tasks need to be performed before we shut down the host and reconfigure
the LUN masking and mapping. This section describes those tasks.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 6-101.
(Figure 6-101 diagram: the host, the SVC I/O group io_grp0, and both IBM or OEM storage subsystems on the SAN, with the Green, Red, Blue, and Black Zones)
Create a Green Zone for our host to use when we are ready for it to access the disk directly
after it is removed from the SVC. (It is assumed that you created the necessary zones.)
After your zone configuration is set up correctly, the SVC sees the new storage subsystem’s
controller by using the svcinfo lscontroller command, as shown in Example 6-67 on
page 346. It is also useful to rename the controller to a more meaningful name by using the
svctask chcontroller -name command.
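For example, a command of the following form renames the controller; the new name and the controller ID are illustrative:
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chcontroller -name ITSO_DS4500 <controller_id>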
Creating LUNs
On our storage subsystem, we created two LUNs and masked them so that the SVC can see
them. We eventually give these LUNs directly to the host and remove the volumes that the
host is using. To check that the SVC can use the LUNs, run the svctask detectmdisk
command, as shown in Example 6-68.
In our example, we use two 10 GB LUNs that are on the DS4500 subsystem; therefore, we migrate back to image mode volumes and to another subsystem in one step. We deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline here.
Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are used by other
activities. Also, we create the storage pools to hold our new MDisks, as shown in
Example 6-69 on page 347.
Example 6-69 Rename the MDisks
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
6 aix_vd online 3 2
18.0GB 512 5.0GB 13.00GB 13.00GB
13.00GB 72 0
7 aix_imgmdg offline 2 0
13.0GB 512 13.0GB 0.00MB 0.00MB
0.00MB 0 0
IBM_2145:ITSO-CLS2:ITSO_admin>
Now, our SVC environment is ready for the volume migration to image mode volumes.
During the migration, our AIX server is unaware that its data is being moved physically
between storage subsystems.
After the migration is complete, the image mode volumes are ready to be removed from the
AIX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
6.8.7 Removing the LUNs from the IBM SAN Volume Controller
The next step requires downtime while we remap and remask the disks so that the host sees
them directly through the Green Zone.
Because our LUNs hold data files only and we use a unique VG, we can remap and remask
the disks without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.
Before you start: Moving LUNs to another storage system might require a driver other than SDD. Check with the storage subsystem’s vendor to determine which driver you need. You might be able to install this driver in advance.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command, as
shown in Example 6-71. To confirm that you removed the volumes, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX
server.
4. Remove the volumes from the SVC by using the svctask rmvdisk command, which
makes the MDisks unmanaged, as shown in Example 6-72.
Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that there is no outstanding dirty cached data for the volume that is being removed. If
uncommitted cached data still exists, the command fails with the following error
message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last write activity for the volume. The time that this command takes to complete depends on how much data must be destaged and how busy the I/O subsystem is.
You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but any modified data was lost.
Important: This step is the last step that you can perform and still safely back out of
any changes that you made.
Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Re-create the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and
mapping were successful, our AIX server boots as though nothing happened. Complete the
following steps:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 6-73 shows the removal by using
SDD.
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
#
4. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems by using the mount /MOUNT_POINT command.
You are ready to start your application.
6. To ensure that the MDisks are removed from the SVC, run the svctask detectmdisk
command. The MDisks are first discovered as offline. Then, they are removed automatically after the SVC determines that no volumes are associated with these MDisks.
To use the SVC for migration purposes only, complete the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
As you can see, little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so the performance of the host is not hindered while the migration progresses.
To use the SVC for storage migrations, complete the steps that are described in the following
sections:
6.5.2, “Adding the SAN Volume Controller between the host system and the DS 3400” on
page 259
6.5.6, “Migrating the volume from image mode to image mode” on page 283
6.5.7, “Removing image mode data from the IBM SAN Volume Controller” on page 291
To migrate from a fully allocated volume to a thin-provisioned volume, complete the following
steps:
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
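A sketch of these three steps by using the CLI follows. The volume name VDISK_NAME and the -rsize value are assumptions; MDG_DS83 and the 32 KB grain size match the copy attributes that are shown later in this section:
svctask addvdiskcopy -mdiskgrp MDG_DS83 -rsize 2% -autoexpand -grainsize 32 VDISK_NAME
svcinfo lsvdisksyncprogress VDISK_NAME
svctask rmvdiskcopy -copy 0 VDISK_NAME
Run the rmvdiskcopy command only after lsvdisksyncprogress reports that the new copy is fully synchronized.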
By using this feature, clients can free managed disk space easily and make better use of their
storage without the need to purchase any other functions for the SVC.
Volume mirroring and thin-provisioned volume functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SVC management by using thin-provisioned volumes without having to allocate
more storage space.
Zero detect works only if the disk contains zeros. An uninitialized disk can contain anything,
unless the disk is formatted (for example, by using the -fmtdisk flag on the mkvdisk
command).
Figure 6-102 shows the thin-provisioned volume zero detect concept.
2. We add a thin-provisioned volume copy with the volume mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76.
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
3. We can split the volume mirror by using the splitvdiskcopy command, or we can remove one of the copies by using the rmvdiskcopy command. Either way, the thin-provisioned copy is kept as our valid copy.
If you need your copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command because that command generates a new volume that you can map to any server that you want.
If you need your copy because you are migrating from a fully allocated volume to a thin-provisioned volume without any effect on the server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept and it remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
We provide a basic technical overview and the benefits of each feature. For more information
about planning and configuration, see the following IBM Redbooks publications:
Easy Tier:
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
– IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521
– IBM DS8000 Easy Tier, REDP-4667 (This concept is similar to SVC Easy Tier.)
Thin provisioning:
– Thin Provisioning in an IBM SAN or IP SAN Enterprise Environment, REDP-4265
– DS8000 Thin Provisioning, REDP-4554 (similar concept to SVC thin provisioning)
RtC:
– Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
– Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
All of these issues deal with data placement and relocation capabilities or data volume reduction. Most of these challenges can be managed by keeping spare resources available, by moving data, and by using data mobility tools or operating system features (such as host-level mirroring) to optimize storage configurations. However, all of these corrective actions are expensive in terms of hardware resources, labor, and service availability.
Relocating data dynamically among the physical storage resources, or effectively reducing the amount of data, transparently to the attached host systems, is becoming increasingly important.
SSD and flash array performance depends greatly on workload characteristics; therefore,
they need to be used with HDDs for optimal performance.
Choosing the correct mix of drives and the correct data placement is critical to achieve
optimal performance at low cost. Maximum value can be derived by placing “hot” data with
high I/O density and low response time requirements on SSDs or flash arrays, and targeting
HDDs for “cooler” data that is accessed more sequentially and at lower rates.
Easy Tier automates the placement of data among different storage tiers. Easy Tier can be
enabled for internal and external storage. This SVC feature boosts your storage infrastructure
performance to achieve optimal performance through a software, server, and storage
solution. Additionally, the new, no-charge feature called storage pool balancing, introduced in
the 7.3 SVC firmware version, automatically moves extents within the same storage tier, from
heavily loaded to less-loaded managed disks (MDisks). Storage pool balancing ensures that
your data is optimally placed among all disks within storage pools.
In general, a storage environment’s I/O is monitored at a volume level, and the entire volume is always placed inside one appropriate storage tier. Determining the amount of I/O activity, moving part of the underlying volume to an appropriate storage tier, and reacting to workload changes are too complex for manual operation. This area is where the Easy Tier feature can be used.
Easy Tier is a performance optimization function because it automatically migrates (or moves)
extents that belong to a volume between different storage tiers (Figure 7-2 on page 365) or
the same storage tier (Figure 7-4 on page 367). Because this migration works at the extent
level, it is often referred to as sub-logical unit number (LUN) migration. The movement of the
extents is online and unnoticed from the host’s point of view. As a result of extent movement,
the volume no longer has all its data in one tier but rather in two or three tiers. Figure 7-2 on
page 365 shows the basic Easy Tier principle of operation.
Figure 7-2 Easy Tier
You can enable Easy Tier on a volume basis. Easy Tier monitors the I/O activity and latency
of the extents on all Easy Tier enabled volumes over a 24-hour period. Based on the
performance log, Easy Tier creates an extent migration plan and dynamically moves high
activity or hot extents to a higher disk tier within the same storage pool. Easy Tier also moves
extents whose activity dropped off, or cooled, from a higher disk tier MDisk back to a lower tier
MDisk. When Easy Tier is running in storage pool rebalance mode, it moves extents from
busy MDisks to less busy MDisks of the same type.
The individual SSDs in the storage that is managed by the SVC are combined into an array,
usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays are used because
of the double parity overhead, with two logical SSDs used for parity only. A LUN is created on
the array and then presented to the SVC as a normal MDisk.
The internal storage configuration of flash arrays can differ depending on an array vendor. But
regardless of the methods that are used to configure flash-based storage, the flash system
maps a volume to a host, in this case, the SVC. From the SVC perspective, the volume that is
presented from flash storage is also seen as a normal MDisk.
Starting with SVC DH8 nodes and firmware V7.3, up to two expansion drawers can be connected to the SVC. Each drawer can hold up to 24 drives, and only SSD drives are supported. The SSD drives are then gathered together to form RAID arrays in the same way that RAID arrays are formed in IBM Storwize systems.
After the creation of a RAID array, it appears as an MDisk of type ssd, which differs from
MDisks that are presented from external storage systems. Because the SVC does not know
from what kind of physical disks the presented MDisks are formed, the default MDisk type that
SVC adds to each external MDisk is enterprise. It is up to the users or administrators to
change the type of MDisks to ssd, enterprise, or nearline (NL).
To change a type of MDisk in the command-line interface (CLI), use the chmdisk command,
as shown in Example 7-1.
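The command takes a form similar to the following sketch; mdisk5 and the ssd tier value are illustrative:
IBM_2145:ITSO_SVC1:superuser>chmdisk -tier ssd mdisk5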
Note: The type of MDisk can also be changed in the GUI. From the animated menu on the
left side of the window, hover over Pools and select External Storage or MDisks by
Pools. Click the small plus sign (+) next to the storage controller name or storage pool
name, depending on whether you chose External Storage or MDisks by Pools to expand
the MDisks. Next, right-click an MDisk and choose Select Tier. Then, choose one of three
options to select the correct tier for your MDisk.
If you do not see the Tier column in the External Storage or MDisks by Pools view, right-click
the blue title row and select the Tier check box, as shown on Figure 7-3 on page 367.
Figure 7-3 Customizing the title row to show the Tier column
The SVC does not automatically detect the type of MDisks, except for MDisks that are formed
out of SSD drives from attached expansion drawers. Instead, all external MDisks are initially
put into the enterprise tier, by default. Then, the administrator must manually change the tier
of MDisks and add them to storage pools. Depending on what type of disks are gathered to
form a storage pool, we distinguish two types of storage pools: single-tier and multitier.
Adding SSDs to the pool means that more space also is now available for new volumes or
volume expansion.
Important: Image mode and sequential volumes are not candidates for Easy Tier
automatic data placement because all extents for those types of volumes must reside on
one, specific MDisk and cannot be moved.
The Easy Tier setting can be changed on a storage pool and volume basis. Depending on the
Easy Tier setting and the number of tiers in the storage pool, Easy Tier services might
function differently. Table 7-1 on page 369 shows possible combinations of Easy Tier settings.
Table 7-1 Easy Tier settings
Storage pool Easy Tier setting | Number of tiers in the storage pool | Volume copy Easy Tier setting | Volume copy Easy Tier status
On | One | On | Balanced (see table note 4)
Table notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that
volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage
statistics for the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables
performance-based pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic
data placement mode for that volume.
6. The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting
for a volume copy is On. Therefore, Easy Tier functions, except pool performance
balancing, are disabled for storage pools with a single tier. Automatic data placement
mode is enabled for all striped volume copies in a storage pool with two or more tiers.
Figure 7-6 on page 370 shows the naming convention and all supported combinations of
storage tiering that are used by Easy Tier.
When Easy Tier is enabled, it performs the following actions among three tiers, as presented
in Figure 7-6 on page 370:
Promote
This action moves the relevant hot extents to a higher performing tier.
Swap
This action exchanges a cold extent in an upper tier with a hot extent in a lower tier.
Warm demote:
– Warm demote prevents performance overload of a tier by demoting a warm extent to
the lower tier.
– This action is triggered when bandwidth or IOPS exceeds a predefined threshold.
Demote or cold demote
The coldest data is moved to a lower HDD tier. This action is only supported between HDD
tiers.
Expanded cold demote
This action demotes the appropriate sequential workloads to the lowest tier to better use
Nearline disk bandwidth.
Storage pool balancing:
– This action redistributes extents within a tier to balance utilization across MDisks for
maximum performance.
– Storage pool balancing moves hot extents from higher-utilized MDisks to lower-utilized
MDisks.
– Storage pool balancing exchanges extents between higher-utilized MDisks and
lower-utilized MDisks.
Easy Tier attempts to migrate the most active volume extents up to SSD first.
A previous migration plan and any queued extents that are not yet relocated are
abandoned.
Note: Extent migration occurs only between adjacent tiers. In a three-tiered storage pool,
Easy Tier will not move extents from SSDs directly to NL-SAS and vice versa without
moving the extents first to SAS drives.
Easy Tier extent migration types are presented in Figure 7-7 on page 372.
Automatic data placement or extent migration mode
In automatic data placement or extent migration operating mode, the storage pool parameter
-easytier on or auto must be set, and the volumes in the pool must have -easytier on. The
storage pool must also contain MDisks with different disk tiers (a multitiered storage pool).
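A sketch of these settings follows; the pool and volume names are taken from Example 7-2 and are used here only for illustration:
IBM_2145:ITSO_SVC1:superuser>chmdiskgrp -easytier auto v7000_1_gen1_pool
IBM_2145:ITSO_SVC1:superuser>chvdisk -easytier on test01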
Dynamic data movement is not apparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated, as explained
in “Implementation rules” on page 373.
The statistic summary file is also created in this mode. This file can be offloaded for input to
the advisor tool. The tool produces a report on the extents that are moved to a higher tier and
a prediction of performance improvement that can be gained if more higher-tier disks are
available.
Options: The Easy Tier function can be turned on or off at the storage pool level and at the
volume level.
The process automatically balances existing data when new MDisks are added to an existing pool, even if the pool contains only a single type of drive. This does not mean that the process migrates extents from existing MDisks to achieve an even extent distribution among all (old and new) MDisks in the storage pool. The Easy Tier rebalancing migration plan within a tier is based on the performance, not the capacity, of the underlying MDisks.
Note: Storage pool balancing can be used to balance extents when mixing different size
disks of the same performance tier. For example, when adding larger capacity drives to a
pool with smaller capacity drives of the same class, storage pool balancing redistributes
the extents to take advantage of the additional performance of the new MDisks.
Implementation rules
Remember the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the SVC:
Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for these volumes is supported, but you cannot migrate extents on
these volumes unless you convert image or sequential volume copies to striped volumes.
Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of the other copy.
If possible, the SVC creates volumes or volume expansions by using extents from MDisks
from the HDD tier. However, it uses extents from MDisks from the SSD tier, if necessary.
When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even if the volume is between
pools that both have Easy Tier automatic data placement enabled. Automatic data placement
for the volume is re-enabled when the migration is complete.
Limitations
When you use IBM System Storage Easy Tier on the SVC, Easy Tier has the following
limitations:
Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
Migrating extents
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
Migrating a volume to another storage pool
When the SVC migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to
its new storage pool, Easy Tier automatic data placement between the generic SSD tier
and the generic HDD tier resumes for the moved volume, if appropriate.
When the SVC migrates a volume from one storage pool to another, it attempts to migrate
each extent to an extent in the new storage pool from the same tier as the original extent.
In several cases, such as where a target tier is unavailable, the other tier is used. For
example, the generic SSD tier might be unavailable in the new storage pool.
Migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with
Easy Tier automatic data placement mode that is active is migrated to image mode, Easy
Tier automatic data placement mode is no longer active on that volume.
Image mode and sequential volumes cannot be candidates for automatic data placement;
however, Easy Tier supports evaluation mode for image mode volumes.
Example 7-2 Changing the EasyTier setting
IBM_2145:ITSO_SVC1:superuser>lsvdisk test01
id 0
name test01
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018E92083000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier off
IBM_2145:ITSO_SVC1:superuser>lsvdisk test01
id 0
name test01
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018E92083000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool
IBM_2145:ITSO_SVC1:superuser>lsmdiskgrp v7000_1_gen2_pool
id 1
name v7000_1_gen2_pool
status online
mdisk_count 3
vdisk_count 0
capacity 300.00GB
extent_size 1024
free_capacity 300.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 3
tier_capacity 300.00GB
tier_free_capacity 300.00GB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
IBM_2145:ITSO_SVC1:superuser>lsmdiskgrp v7000_1_gen2_pool
id 1
name v7000_1_gen2_pool
status online
mdisk_count 3
vdisk_count 0
capacity 300.00GB
extent_size 1024
free_capacity 300.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
easy_tier off
easy_tier_status inactive
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 3
tier_capacity 300.00GB
tier_free_capacity 300.00GB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id
site_name
parent_mdisk_grp_id 1
parent_mdisk_grp_name v7000_1_gen2_pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no
7.2.8 Monitoring tools
IBM Storage Tier Advisor Tool (STAT) is a Windows console application that analyzes heat
data files that are produced by Easy Tier. STAT produces a graphical display of the amount of
“hot” data per volume and predicts how additional flash drive (SSD) capacity, enterprise
drives, and nearline drives might improve the performance for the system by storage pool.
Heat data files are produced approximately once a day (that is, every 24 hours) when Easy
Tier is active on one or more storage pools and summarizes the activity per volume since the
prior heat data file was produced. On the SVC and Storwize serial products, the heat data file
is in the /dumps directory on the configuration node and named
dpa_heat.node_name.time_stamp.data.
Any existing heat data file is erased after seven days. The file must be offloaded by the user
and STAT must be invoked from a Windows command prompt console with the file specified
as a parameter. The user can also specify the output directory. STAT creates a set of HTML
files and the user can then open the resulting index.html in a browser to view the results.
Updates to STAT for SVC 7.3 added more reporting capability. As a result, when STAT is run on a heat map file, three additional CSV files are created and placed in the Data_files directory.
Figure 7-8 shows the CSV files that are highlighted in the Data_files directory after running
STAT over an SVC heatmap.
In addition to STAT, the SVC 7.3 code includes another utility, a Microsoft SQL file for creating graphical reports about the Easy Tier workload. The IBM STAT Charting Utility takes the output of the three CSV files and turns them into graphs for simple reporting.
Figure 7-10 STAT Charting Utility Daily Summary report
The STAT Charting Utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The IBM SVC has this capability
for Fibre Channel and iSCSI provisioned volumes.
You can imagine thin provisioning as the same process as when airlines sell more tickets for a
flight than there are physical seats, assuming that some passengers do not appear at
check-in. They do not assign actual seats at the time of sale, which avoids each client having
a claim on a specific seat number. The same concept applies to thin provisioning (the airline),
the SVC (the plane), and its volumes (the seats). The storage administrator (the airline
ticketing system) must closely monitor the allocation process and set correct thresholds.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other SVC components (such as FlashCopy or
remote copy) and to the hosts. For example, you can create a volume with a real capacity of
only 100 GB but a virtual capacity of 1 TB. The actual space that is used by the volume on the
SVC will be 100 GB but hosts will see a 1 TB volume.
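For illustration only (the pool and volume names are examples, not taken from a lab configuration), such a volume can be created from the CLI by specifying the real size as a percentage of the virtual size:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 1 -unit tb -rsize 10% -autoexpand -name thin_demo_vol
Running lsvdisk thin_demo_vol afterward reports a capacity of 1 TB, while the real_capacity field shows only roughly a tenth of that until the host writes more data.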
A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and the volume must
expand.
Warning threshold: Enable the warning threshold (by using email or a Simple Network
Management Protocol (SNMP) trap) when you are working with thin-provisioned volumes.
You can enable the warning threshold on the volume, and on the storage pool side,
especially when you do not use the autoexpand mode. Otherwise, the thin volume goes
offline if it runs out of space.
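As a hedged example with placeholder names, the thresholds can be set on an existing volume and on its storage pool with the chvdisk and chmdiskgrp commands:
chvdisk -warning 80% thin_vol01
chmdiskgrp -warning 80% Pool0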
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.
Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.
Grain definition: The grain is defined when the volume is created and can be 32 KB,
64 KB, 128 KB, or 256 KB.
Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.
To create a thin-provisioned volume, choose Create Volumes from the Volumes menu in a
dynamic menu and select Thin-Provision, as shown in Figure 7-13. Enter the required
capacity and volume name.
In the Advanced Settings menu of this wizard, you can set virtual and real capacity, warning
thresholds, and grain size, as shown in Figure 7-14 on page 385.
Figure 7-14 Advanced options
For more information about the configuration procedure for thin-provisioned volumes, see
7.3.1, “Configuring a thin-provisioned volume” on page 383.
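A CLI equivalent of this wizard, shown here as a sketch with example names and values, uses the mkvdisk command with the thin-provisioning parameters:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol01
The -rsize value sets the initial real capacity (here, 2% of the virtual size), and -grainsize accepts 32, 64, 128, or 256 KB.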
Thin-provisioned volumes require more CPU processing, so the performance per I/O
Group might be lower. Use the striping policy to spread thin-provisioned volumes across
many storage pools, as with normal, generic, fully allocated volumes.
Important: Do not use thin-provisioned volumes where high I/O performance is required.
Thin-provisioned volumes save capacity only if the host server does not write to whole
volumes. Whether a thin-provisioned volume works well partly depends on how the file
system allocates space. Certain file systems (for example, New Technology File System
[NTFS]) write to the whole volume before they overwrite deleted files. Other file systems
reuse space in preference to allocating new space.
File system problems can be mitigated by tools, such as “defrag,” or by managing storage by
using host Logical Volume Managers (LVMs).
Note: Starting with SVC firmware V7.3, the cache subsystem architecture was completely
redesigned. Now, thin-provisioned volumes can benefit from lower-level cache functions
(such as coalescing writes or prefetching), which greatly improve performance.
Table 7-2 Maximum thin-provisioned volume virtual capacities for an extent size
Extent size in MB    Maximum volume real capacity in GB    Maximum thin-provisioned volume virtual capacity in GB
16                   2,048                                 2,000
32                   4,096                                 4,000
64                   8,192                                 8,000
Table 7-3 on page 387 shows the maximum thin-provisioned volume virtual capacities for a
grain size.
Table 7-3 Maximum thin-provisioned volume virtual capacities for a grain size
Grain size in KB    Maximum thin-provisioned volume virtual capacity in GB
32                  260,000
64                  520,000
128                 1,040,000
256                 2,080,000
For more information and detailed performance considerations for configuring thin
provisioning, see IBM System Storage SAN Volume Controller Best Practices and
Performance Guidelines, SG24-7521. You can also go to the IBM SAN Volume Controller 7.4
Knowledge Center at this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU_7.4.0/com.ibm.storage.svc.con
sole.740.doc/svc_ichome_740.html?cp=STPVGU%2F0
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geoseismic data, and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall consumed
space. More space can be provided to users without any change to the environment.
Many file types can be stored on general-purpose servers. As practical guidance, the
estimated compression ratios here are based on actual field experience. Expected
compression ratios are 50% - 60%.
File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. High compression ratios are common in
database volumes. Examples of databases that can greatly benefit from RtC are IBM DB2®,
Oracle, and Microsoft SQL Server. Expected compression ratios are 50% - 80%.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source.
Examples of virtualization solutions that can greatly benefit from RtC are VMware, Microsoft
Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios are 45% -
75%.
Tip: Virtual machines (VMs) with file systems that contain compressed files are not good
candidates for compression.
7.4.2 Real-time Compression concepts
The Random Access Compression Engine (RACE) technology is based on over 50 patents
that are not primarily about compression. Instead, they define how to make industry standard
Lempel-Ziv (LZ) compression of primary storage operate in real time and allow random
access. The primary intellectual property behind this technology is the RACE component.
At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an inline compression
technology, which means that each host write is compressed as it passes through the SVC
software to the disks. This technology has a clear benefit over other compression
technologies that are post-processing based. These technologies do not provide immediate
capacity savings; therefore, they are not a good fit for primary storage workloads, such as
databases and active data set applications.
RACE is based on the Lempel-Ziv lossless data compression algorithm and operates in a
real-time method. When a host sends a write request, the request is acknowledged by the
write cache of the system, and then staged to the storage pool. As part of its staging, the
write request passes through the compression engine and is then stored in compressed
format onto the storage pool. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage.
Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool.
IBM RtC is a self-tuning solution, which is similar to the SVC system itself: it adapts to the
workload that runs on the system at any particular moment.
Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities, such as the zip and gzip utilities. At a high level, these utilities take a file
as their input, and parse the data by using a sliding window technique. Repetitions of data are
detected within the sliding window history, most often 32 KB. Repetitions outside of the
window cannot be referenced. Therefore, the file cannot be reduced in size unless data is
repeated when the window “slides” to the next 32 KB slot.
Figure 7-15 on page 390 shows compression that uses a sliding window, where the first two
repetitions of the string “ABCDEF” fall within the same compression window, and can
therefore be compressed by using the same dictionary. The third repetition of the string falls
outside of this window and therefore cannot be compressed by using the same compression
dictionary as the first two repetitions, reducing the overall achieved compression ratio.
However, drawbacks exist to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.
Figure 7-16 on page 391 shows an example of how the data is broken into fixed size chunks
(in the upper-left corner of the figure). It also shows how each chunk gets compressed
independently into variable length compressed chunks (in the upper-right side of the figure).
The resulting compressed chunks are stored sequentially in the compressed output.
This method enables an efficient and consistent method to index the compressed data
because the data is stored in fixed-size containers.
Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within the chunk. The
number of repetitions is affected by how much the bytes stored in the chunk are related to
each other. The relationship between bytes is driven by the format of the object. For example,
an office document might contain textual information, and an embedded drawing, such as this
page. Because the chunking of the file is arbitrary, it has no notion of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing. This process yields a lower compression ratio because
the different data types mixed together cause a suboptimal dictionary of repetitions. That is,
fewer repetitions can be detected because a repetition of bytes in a text object is unlikely to be
found in a drawing.
This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.
This challenge was also addressed with the predecide mechanism that was introduced in
version 7.1.
Predecide mechanism
Certain data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space but still requires resources, such as CPU and memory. To avoid
spending resources on uncompressible data, and to provide the ability to use a different,
more effective (in this particular case) compression algorithm, IBM invented a predecide
mechanism that was first introduced in version 7.1.
The chunks that are below a certain compression ratio are skipped by the compression
engine, which saves CPU time and memory processing. Chunks that compress poorly with
the main compression algorithm, but that can still be compressed well with another algorithm,
are marked and processed with that other algorithm. The result can vary because predecide
does not check the entire block, only a sample of it.
Figure 7-19 on page 394 shows how the detection mechanism works.
Temporal compression
RACE offers a technology leap, which is called temporal compression, beyond location-based
compression.
When host writes arrive to RACE, they are compressed and fill fixed size chunks that are also
called compressed blocks. Multiple compressed writes can be aggregated into a single
compressed block. A dictionary of the detected repetitions is stored within the compressed
block. When applications write new data or update existing data, the data is typically sent
from the host to the storage system as a series of writes. Because these writes are likely to
originate from the same application and be from the same data type, more repetitions are
usually detected by the compression algorithm.
Figure 7-19 shows (in the upper part) how three writes sent one after the other by a host end
up in different chunks. They get compressed in different chunks because their location in the
volume is not adjacent. This approach yields a lower compression ratio because the same
data must be compressed by using three separate dictionaries. When the same
three writes are sent through RACE (in the lower part of the figure), the writes are
compressed together by using a single dictionary. This approach yields a higher compression
ratio than location-based compression.
Figure 7-19 Location-based versus temporal compression
RACE technology is implemented into the SVC thin provisioning layer, and it is an organic
part of the stack. The SVC software stack is shown in Figure 7-20. Compression is
transparently integrated with existing system management design. All of the SVC advanced
features are supported on compressed volumes. You can create, delete, migrate, map
(assign), and unmap (unassign) a compressed volume as though it were a fully allocated
volume. In addition, you can use RtC with Easy Tier on the same volumes. This compression
method provides nondisruptive conversion between compressed and decompressed
volumes. This conversion provides a uniform user experience and eliminates the need for
special procedures when dealing with compressed volumes.
Figure 7-20 RACE integration within the SVC 7.4 software stack
Then, you select a compressed type of volume and select a storage pool where you want to
place the new copy (Figure 7-22). If you do not want to move the volume to different storage,
select the same storage pool as the existing, original volume copy.
After the copies are fully synchronized, you can delete the original, uncompressed copy, as
shown on Figure 7-23 on page 398.
As a result, you compressed data on the existing volume, as shown on Figure 7-24. This
process is nondisruptive, so the data remains online and accessible by applications and
users.
This capability enables clients to regain space from the storage pool, which can then be
reused for other applications.
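The same conversion can be performed from the CLI. The following sketch uses example volume and pool names; it adds a compressed copy, lets it synchronize, and then removes the original copy:
addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed vol01
lsvdisksyncprogress vol01
rmvdiskcopy -copy 0 vol01
Use lsvdiskcopy to confirm which copy ID belongs to the original uncompressed copy, and remove it only after lsvdisksyncprogress reports 100% for the new copy.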
With the virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. This capability allows them to see
a tremendous return on their SVC investment. On the initial purchase of an SVC with RtC,
clients can defer their purchase of new storage. When new storage must be acquired, IT can
purchase less storage than would have been required before compression.
Important: The SVC reserves some of its resources, such as CPU cores and RAM
memory, after you create one compressed volume or volume copy. This reserve might
affect your system performance if you do not plan for the reserve in advance.
The configuration is similar to generic volumes and not apparent to users. From the Volumes
menu in the dynamic menu, choose Create Volumes and select Compressed, as shown in
Figure 7-25. Choose the storage pool that you want to use and enter the required capacity
and volume name.
The summary line at the bottom of the wizard provides information about the allocated
(virtual) capacity and the real capacity that data uses on this volume. In our example, we
defined a 10 GiB volume, but the real capacity is only 204.80 MiB because no data exists
from the host.
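From the CLI, a compressed volume can be created in a single step. This is a sketch with example names; the real size and pool are illustrative:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -compressed -name comp_vol01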
When the compressed volume is configured, you can directly map it to the host or map it later
on request.
In the previous model, the RtC software component sits below the single-level read/write
cache. The benefit of this model is that the upper-level read/write cache masks from the host
any latency that is introduced by the RtC software component. However, in this single-level
caching model, the destaging of writes for compressed I/Os to disk might not be optimal for
certain workloads because the RtC component interacts directly with uncached storage.
Figure 7-26 on page 401 depicts compression code in the current SVC software stack.
Figure 7-26 Real-time Compression code in the SVC software stack
In the new, dual-level caching model, the RtC software component sits below the upper-level
fast write cache and above the lower-level advanced read/write cache. Several advantages
are available for this dual-level model regarding RtC:
Host writes, whether to compressed or decompressed volumes, are still serviced directly
through the upper-level write cache, preserving low host write I/O latency. Response time
can improve with this model because the upper cache flushes smaller amounts of data to
RACE more frequently.
The performance of the destaging of compressed write I/Os to storage improves because
these I/Os are now destaged through the advanced lower-level cache, as opposed to
directly to storage.
The existence of a lower-level write cache below the RtC component in the software stack
allows for the coalescing of compressed writes, and as a result, a reduction in back-end
I/Os due to the ability to perform full-stride writes for compressed data.
The existence of a lower-level read cache below the RtC component in the software stack
allows the temporal locality nature of RtC to benefit from pre-fetching from the back-end
storage.
The main (lower-level) cache now stores compressed data for compressed volumes,
increasing the effective size of the lower-level cache.
Support for larger numbers of compressed volumes is available.
Note: To use the RtC feature on 2145-DH8 nodes, the secondary CPU, the additional 32 GB
memory option, and at least one Quick Assist compression acceleration card are required.
With a single card, the maximum number of compressed volumes per I/O Group is 200. With
the addition of a second Quick Assist card, the maximum number of compressed volumes
per I/O Group is 512.
For more information about the compression accelerator cards, see Chapter 3, “Planning and
configuration” on page 73.
Figure 7-27 Dual RACE enhancement
Thanks to the dual RACE enhancement, compression performance can be boosted by up to
two times for compressed workloads when compared to earlier SVC code.
To take advantage of dual RACE, several software and hardware requirements must be met:
The SVC software must be at level 7.4.
Only SVC 2145-DH8 nodes are supported.
A second eight-core CPU must be installed per SVC node.
An additional 32 GB of memory must be installed per SVC node.
At least one Coleto Creek acceleration card must be installed per SVC node. The second
acceleration card is not required.
Note: We recommend using two acceleration cards for the best performance.
When using the dual RACE feature, the acceleration cards are shared between RACE
components, which means that the acceleration cards are used simultaneously by both
RACE components. The rest of the resources, such as CPU cores and RAM, are evenly
divided between the RACE components. You do not need to manually enable dual RACE;
dual RACE triggers automatically when all minimal software and hardware requirements are
met. If the SVC is compression capable but the minimal requirements for dual RACE are not
met, only one RACE instance is used (as in the previous versions of the SVC code).
Figure 7-28 on page 404 shows how the SVC resources are split when using compression.
For more information about RtC and its deployment in the IBM SVC, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859.
In Chapter 10, “SAN Volume Controller operations using the GUI” on page 655, we explain
how to use the GUI and Advanced Copy Services.
You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and cache and, therefore, the copy is not apparent
to the host.
Important: Because FlashCopy operates at the block level below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.
While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately. This process is performed by using a bitmap (or bit array), which tracks
changes to the data after the FlashCopy is started, and an indirection layer, which allows data
to be read from the source volume transparently.
The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
Rapidly creating consistent backups of dynamically changing data
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
Rapidly creating copies of production data sets for application development and testing
Rapidly creating copies of production data sets for auditing purposes and data mining
Rapidly creating copies of production data sets for quality assurance
Regardless of your business needs, FlashCopy within the SVC is flexible and offers a broad
feature set, which makes FlashCopy applicable to many scenarios.
After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your infrastructure.
When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files by making the target available on a host. We suggest that you do not make the
target available to the source host because presenting duplicate copies of a disk causes
problems for most host operating systems. Copy the files to the source through the normal host data copy
methods for your environment.
This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy. The resulting copy requires the same
type of recovery procedure, such as log replay and file system checks, that is required
following a host crash. FlashCopies that are crash consistent often can be used following file
system and application recovery procedures.
Note: Although the best way to perform FlashCopy is to flush host cache first, certain
companies, such as Oracle, support using snapshots without it, as stated in Metalink note
604683.1.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases because the database maintains strict control over I/O. This method contrasts
with flushing data from both the application and the backing database, which is the
recommended method because it is safer. However, the write-suspend method can be used
when such facilities do not exist or your environment is time sensitive.
The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
The source and target volumes must be the same “virtual” size.
The source and target volumes must be on the same SVC clustered system.
The source and target volumes do not need to be in the same I/O Group or storage pool.
The storage pool extent sizes can differ between the source and target.
The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
Consistency Groups are supported to enable FlashCopy across multiple volumes at the
same time.
Up to 255 FlashCopy Consistency Groups are supported per system.
Up to 512 FlashCopy mappings can be placed in one Consistency Group.
The target volume can be updated independently of the source volume.
Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the SVC I/O Group to prevent a single point of failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
Thin-provisioned FlashCopy (or Snapshot in the GUI) uses disk space only when updates
are made to the source or target data and not for the entire capacity of a volume copy.
FlashCopy licensing is based on the virtual capacity of the source volumes.
Incremental FlashCopy copies all of the data when you first start FlashCopy and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
The maximum number of supported FlashCopy mappings is 4096 per SVC system.
The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse
FlashCopy does not destroy the original target, which allows processes by using the target,
such as a tape backup, to continue uninterrupted.
The SVC also provides the ability to create an optional copy of the source volume to be made
before the reverse copy operation starts. This ability to restore back to the original source
data can be useful for diagnostic purposes.
The production disk is instantly available with the backup data. Figure 8-1 shows an example
of Reverse FlashCopy.
Regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the
Reverse FlashCopy operation copies the modified data only.
Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy map with the same target volume.
IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores
using advanced point-in-time image technologies in the SVC. In addition, it provides an
optional integration with IBM Tivoli Storage Manager for the long-term storage of snapshots.
Figure 8-2 shows the integration of Tivoli Storage Manager and FlashCopy Manager from a
conceptual level.
Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before you
issue FlashCopy start commands to ensure that a consistent backup of the application is
made. You can put databases into hot backup mode and flush the file system cache before
starting the FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups that use
FlashCopy, and provides a simple interface to perform the “reverse” operation.
Released December 2013, IBM Tivoli FlashCopy Manager V4.1 adds support for VMware 5.5
and vSphere environments with Site Recovery Manager (SRM), with instant restore for
VMware Virtual Machine File System (VMFS) data stores. This release also integrates with
IBM Tivoli Storage Manager for Virtual Environments, and it allows backup of point-in-time
images into the Tivoli Storage Manager infrastructure for long-term storage.
The addition of VMware vSphere brings support and application awareness for FlashCopy
Manager to the following applications:
Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability
Groups
IBM DB2 and Oracle databases, for use either with or without SAP environments
IBM General Parallel File System (GPFS) software snapshots for DB2 pureScale®
Other applications supported through script customizing
For more information about IBM Tivoli FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivostorflasmana/
Figure 8-4 FlashCopy Manager integration with remote copy services
Before you start a FlashCopy (regardless of the type and specified options), you must issue a
prestartfcmap or prestartfcconsistgrp, which puts the SVC cache into write-through mode
and flushes the I/O that is currently bound for your volume. After FlashCopy is started, an
effective copy of a source volume to a target volume is created. The content of the source
volume is presented immediately on the target volume, and the original content of the target
volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0).
Note: Instead of using prestartfcmap or prestartfcconsistgrp, you can also use the
-prep parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
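A minimal CLI sequence for a single mapping, with example volume and mapping names, looks like the following sketch:
mkfcmap -source db_vol -target db_vol_copy -name db_map -copyrate 50
startfcmap -prep db_map
Alternatively, issue prestartfcmap db_map first and then startfcmap db_map to separate the prepare and start steps.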
The source and target volumes are available for use immediately after the FlashCopy
operation. The FlashCopy operation creates a bitmap that is referenced and maintained to
direct I/O requests within the source and target relationship. This bitmap is updated to reflect
the active block locations while data is copied in the background from the source to the target
and updates are made to the source.
Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.
The source and target volumes must belong to the same SVC system, but they do not have to
be in the same I/O Group or storage pool. FlashCopy associates a source volume to a target
volume through FlashCopy mapping.
To become members of a FlashCopy mapping, the source and target volumes must be the
same size. Volumes that are members of a FlashCopy mapping cannot have their size
increased or decreased while they are members of the FlashCopy mapping.
A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.
Figure 8-6 shows the concept of FlashCopy mapping.
Figure 8-7 also shows four targets and mappings that are taken from a single source, with
their interdependencies. In this example, Target 1 is the oldest (as measured from the time
that it was started) through to Target 4, which is the newest. The ordering is important
because of how the data is copied when multiple target volumes are defined and because of
the dependency chain that results.
A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 8-7). The older targets refer to
newer targets first before referring to the source.
From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target volumes and the true source volume as a type of composite
source.
For more information about Multiple Target FlashCopy, see 8.4.6, “Interaction and
dependency between multiple target FlashCopy mappings” on page 420.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.
Figure 8-8 shows a Consistency Group that includes two FlashCopy mappings.
Dependent writes
To show why it is crucial to use Consistency Groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.
In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this method, the SVC supports the concept of Consistency
Groups.
A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings. The maximum
number of FlashCopy mappings that is supported by the SVC system V7.4 is 4,096.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and,
therefore, simultaneously for all of the FlashCopy mappings that are defined in the
Consistency Group.
For example, when a FlashCopy start command is issued to the Consistency Group, all of
the FlashCopy mappings in the Consistency Group are started at the same time. This
simultaneous start results in a point-in-time copy that is consistent across all of the FlashCopy
mappings that are contained in the Consistency Group.
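As a sketch with example names, a Consistency Group for a database that spans a data volume and a log volume can be created and started as follows:
mkfcconsistgrp -name cg_db
mkfcmap -source db_data -target db_data_copy -consistgrp cg_db -name map_data
mkfcmap -source db_log -target db_log_copy -consistgrp cg_db -name map_log
startfcconsistgrp -prep cg_db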
If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
Consistency Group or in separate Consistency Groups, the resulting FlashCopy produces
multiple identical copies of the source data.
Maximum configurations
Table 8-1 on page 418 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configurations

FlashCopy targets per source: 256. This maximum is the number of FlashCopy mappings that can exist with the same source volume.
FlashCopy mappings per system: 4,096. The number of mappings is no longer limited by the number of volumes in the system, so the FlashCopy component limit applies.
FlashCopy Consistency Groups per system: 255. This maximum is an arbitrary limit that is policed by the software.
FlashCopy volume capacity per I/O Group: 1,024 TB. This maximum is a limit on the quantity of FlashCopy mappings that use bitmap space from this I/O Group. This maximum configuration uses all 512 MB of bitmap space for the I/O Group and allows no MM and GM bitmap space. The default is 40 TB.
FlashCopy mappings per Consistency Group: 512. This limit exists because of the time that is taken to prepare a Consistency Group with many mappings.
To show how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and then started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (which
creates the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on the source volumes and target volumes.
FlashCopy provides the semantics of a point-in-time copy by using the indirection layer, which
intercepts I/O that is directed at the source or target volumes. The act of starting a FlashCopy
mapping causes this indirection layer to become active in the I/O path, which occurs
automatically across all FlashCopy mappings in the Consistency Group. The indirection layer
then determines how each I/O is to be routed, which is based on the following factors:
The volume and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap
The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O
from the target volume to the source volume, or queues the I/O while it arranges for data to be
copied from the source volume to the target volume. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.
The FlashCopy bitmap dictates read and write behavior for the source and target volumes.
Source reads
Reads are performed from the source volume, which is the same for non-FlashCopy volumes.
Source writes
Writes to the source cause the following results:
If the grain was not copied to the target yet, the grain is copied before the actual write is
performed to the source. The bitmap is updated, which indicates that this grain is already
copied to the target.
If the grain was already copied, the write is performed to the source as usual.
Target reads
Reads are performed from the target if the grain was copied. Otherwise, the read is
performed from the source and no copy is performed.
Target writes
Writes to the target cause the following results:
If the grain was not copied from the source to the target, the grain is copied from the
source to the target before the actual write is performed to the target. The bitmap is
updated, which indicates that this grain is already copied to the target.
If the entire grain is being updated on the target, the target is marked as split with the
source (if no I/O error occurs during the write) and the write goes directly to the target.
If the grain in question was already copied from the source to the target, the write goes
directly to the target.
Figure 8-9 on page 420 shows how the background copy runs while I/Os are handled
according to the indirection layer algorithm.
Target 0 is not dependent on a source because it completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).
Target 1 depends on Target 0. It remains dependent until all of Target 1 is copied. Target 2
depends on it because Target 2 is 20% copy complete. After all of Target 1 is copied, it can
then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2
is copied. No target depends on Target 2; therefore, when all of the data is copied to Target 2,
it can move to the idle_copied state.
If the grain of the next oldest mapping is not yet copied, it must be copied before the write can
proceed to preserve the contents of the next oldest mapping. The data that is written to the
next oldest mapping comes from a target or source.
If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target or the source if none are
copied. After this copy is done, the write can be applied to the target.
Note: The stopping copy process can be ongoing for several mappings that share the
source at the same time. At the completion of this process, the mapping automatically
makes an asynchronous state transition to the stopped state or the idle_copied state if the
mapping was in the copying state with progress = 100%.
For example, if the mapping that is associated with Target 0 was issued a stopfcmap
command or a stopfcconsistgrp command, Target 0 enters the stopping state while a
process copies the data of Target 0 to Target 1. After all of the data is copied, Target 0 enters
the stopped state and Target 1 is no longer dependent upon Target 0; however, Target 1
remains dependent on Target 2.
The indirection layer rules for the target volume can be summarized as follows:
Target volume, grain not yet copied. Host read: if any newer targets exist for this source in which this grain was copied, read from the oldest of these targets; otherwise, read from the source. Host write: hold the write, and check the dependency target volumes to see whether the grain was copied; if the grain is not copied to the next oldest target for this source, copy the grain to the next oldest target, and then write to the target.
Target volume, grain already copied. Host read: read from the target volume. Host write: write to the target volume.
This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.
The logical placement of the FlashCopy indirection layer is shown in Figure 8-12.
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize volume
commands to modify the size of the volume.
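For example, with a placeholder volume name, and only while the volume is not a member of a FlashCopy mapping:
expandvdisksize -size 10 -unit gb vol01
shrinkvdisksize -size 10 -unit gb vol01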
You can use an image mode volume as a FlashCopy source volume or target volume.
8.4.10 FlashCopy mapping events
In this section, we describe the events that modify the states of a FlashCopy. We also
describe the mapping events that are listed in Table 8-3.
Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.
Start: When all of the FlashCopy mappings in a Consistency Group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly concerning I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command.
Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.
Copy complete: After all of the source data is copied to the target and no dependent mappings exist, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and it can be reactivated by preparing and starting again.
8.4.11 FlashCopy mapping states
In this section, we describe the states of a FlashCopy mapping.
Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for the source and the target volumes.
If the mapping is incremental and the background copy is complete, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O Group that the mapping is assigned to is lost, the source and target volumes are
offline.
Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the SCSI
front end as a hardware error. If the mapping is incremental and a previous mapping is
completed, the mapping records the differences between the source and target volumes only.
If the connection to both nodes in the I/O Group that the mapping is assigned to is lost, the
source and target volumes go offline.
Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping is completed, the mapping records the differences between the source and target
volumes only. If the connection to both nodes in the I/O Group that the mapping is assigned to
is lost, the source and target volumes go offline.
Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed while they are waiting for the cache flush to complete. To
overcome this problem, SVC FlashCopy supports the prestartfcmap or
prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing
I/Os to continue to the source volume.
In the preparing state, the FlashCopy mapping is prepared by completing the following steps:
1. Flushing any modified write data that is associated with the source volume from the cache.
Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode so that subsequent
writes wait until data is written to disk before the write command that is received from the
host is complete.
3. Discarding any read or write data that is associated with the target volume from the cache.
Stopping
The mapping is copying data to another mapping.
If the background copy process is complete, the target volume is online while the stopping
copy process completes.
If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that was not flushed and was
written to the source or target volume before the suspension is in cache until the mapping
leaves the suspended state.
8.4.12 Thin-provisioned FlashCopy
FlashCopy source and target volumes can be thin-provisioned.
Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.
The benefit of the use of a FlashCopy mapping with background copy enabled is that the
target volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is not
performed, the target volume remains a valid copy of the source data only while the
FlashCopy mapping remains in place.
Table 8-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.
Copy rate value    Data copied per second    Grains per second (256 KB grain)    Grains per second (64 KB grain)
1 - 10             128 KB                    0.5                                 2
11 - 20            256 KB                    1                                   4
21 - 30            512 KB                    2                                   8
31 - 40            1 MB                      4                                   16
41 - 50            2 MB                      8                                   32
51 - 60            4 MB                      16                                  64
61 - 70            8 MB                      32                                  128
71 - 80            16 MB                     64                                  256
81 - 90            32 MB                     128                                 512
The grains per second numbers represent the maximum number of grains that the SVC
copies per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.
If the SVC cannot achieve these copy rates because of insufficient bandwidth from the SVC
nodes to the MDisks, the background copy I/O contends for resources on an equal basis with
the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving from the
hosts tend to see an increase in latency and a consequential reduction in throughput.
Background copy and foreground I/O continue to make progress, and do not stop, hang, or
cause the node to fail. The background copy is performed by both nodes of the I/O Group in
which the source volume is found.
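The background copy rate of an existing mapping can be changed, and its progress queried, from the CLI. For example, with a mapping named db_map (an example name):
chfcmap -copyrate 80 db_map
lsfcmapprogress db_map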
8.4.14 Synthesis
The FlashCopy functionality in the SVC creates copies of the volumes. All of the data in the
source volume is copied to the destination volume, including operating system, logical volume
manager, and application metadata.
Synthesis: Certain operating systems cannot use FlashCopy without another step, which
is called synthesis. Synthesis performs a type of transformation on the operating system
metadata that is on the target volume so that the operating system can use the disk.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared by the mappings that are derived from a particular
source volume. The lock is used in the following modes under the following conditions:
The lock is held in shared mode during a read from the target volume, which touches a
grain that was not copied from the source.
The lock is held in exclusive mode while a grain is being copied from the source to the
target.
If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process that is wanting to use the lock in
shared or exclusive mode must wait for it to be freed.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O Group becomes inaccessible. FlashCopy continues with a
single copy of the FlashCopy bitmap that is stored as non-volatile in the remaining node in the
source I/O Group. The system metadata is updated to indicate that the missing node no
longer holds a current bitmap. When the failing node recovers or a replacement node is
added to the I/O Group, the bitmap redundancy is restored.
Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, a subset of the nodes can be temporarily
isolated from several of the MDisks. When this situation happens, the managed disks are said
to be path offline on certain nodes.
Other nodes: Other nodes might see the managed disks as online because their
connection to the managed disks is still functioning.
When an MDisk enters the path offline state on an SVC node, all of the volumes that have
extents on the MDisk also become path offline. Again, this situation happens only on the
affected nodes. When a volume is path offline on a particular SVC node, the host access to
that volume through the node fails with the SCSI check condition indicating offline.
Table 8-6 on page 433 lists the supported combinations of FlashCopy and remote copy. In the
table, remote copy refers to MM and GM.
Table 8-6 FlashCopy and remote copy interaction (columns: Component, Remote copy primary site, Remote copy secondary site)
Although these presets meet most FlashCopy requirements, they do not provide support for
all possible FlashCopy options. If more specialized options are required that are not
supported by the presets, the options must be performed by using CLI commands.
In this section, we describe the three preset options and their use cases.
Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, the copy is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced; therefore, many snapshot copies can be used
in the environment.
Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.
Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, users do not expect to refresh the clone or reference the
original production data again. If the source is thin-provisioned, the target is thin-provisioned
for the auto-create target.
Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.
Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, as in the loss of the underlying physical controller. The user plans to
periodically update the secondary copy and does not want the overhead of creating a copy
each time (and incremental FlashCopy times are faster than full copy, which helps to reduce
the window where the new backup is not yet fully effective). If the source is thin-provisioned,
the target is thin-provisioned on this option for the auto-create target.
Another use case, which is not supported by the name, is to create and maintain (periodically
refresh) an independent image that can be subjected to intensive I/O (for example, data
mining) without affecting the source volume’s performance.
8.5 Volume mirroring and migration options
Volume mirroring is a simple RAID 1-type function that allows a volume to remain online even
when the storage pool that backs it becomes inaccessible. Volume mirroring is designed to
protect the volume from storage infrastructure failures by seamless mirroring between
storage pools.
Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and
volume mirroring cannot be manipulated like a FlashCopy or other types of copy volumes.
However, this feature provides migration functionality, which can be obtained by splitting the
mirrored copy from the source or by using the “migrate to” function. Volume mirroring cannot
control back-end storage mirroring or replication.
With volume mirroring, host I/O completes when both copies are written. Before version 6.3.0,
this feature took a copy offline when it had an I/O timeout, and then resynchronized with the
online copy after it recovered. With V6.3.0, this feature is enhanced with a tunable latency
tolerance. This tolerance provides an option to give preference either to host latency (by temporarily allowing the redundancy between the two copies to be lost) or to maintaining redundancy. The two settings for this tunable timeout value are Latency and Redundancy.
The Latency tuning option, which is set with svctask chvdisk -mirrorwritepriority latency, is the default. This behavior was available in releases before V6.3.0. It prioritizes host I/O latency, which yields a preference to host I/O over availability.
However, you might need to give preference to redundancy in your environment when availability is more important than I/O response time. Use the svctask chvdisk -mirrorwritepriority redundancy command to set the Redundancy option.
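The following shell-style sketch shows how this setting might be applied and verified for a hypothetical volume named VDISK01; the command prefix and the output field names can vary slightly by code level:

# VDISK01 is a hypothetical volume name; substitute your own volume
svctask chvdisk -mirrorwritepriority redundancy VDISK01
# Review the volume details; the mirror write priority attribute is shown in the lsvdisk output
svcinfo lsvdisk VDISK01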
Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.
Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your SVC if they are not installed. For more
information, see the IBM SVC Host Attachment User’s Guide, SC26-7905. Ensure that you
consult the revision of the document that applies to your SVC.
With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using volume mirroring over volume migration is
beneficial because storage pools do not have to have the same extent size with volume
mirroring (a requirement with volume migration).
Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume so the result is one
volume presented to the host with two copies of data connected to this volume. Only
splitting copies creates another volume and then both volumes have only one copy of the
data.
Starting with firmware 7.3 and the introduction of the new cache architecture, mirrored
volume performance is significantly improved. Now, lower cache is beneath the volume
mirroring layer, which means that both copies have their own cache. This approach helps in
cases of copies of different types, for example, generic and compressed, because each copy
uses its independent cache and performs its own read prefetch. Destaging of the cache can
now be independent for each copy, so one copy does not affect the performance of a second
copy.
Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the
destaging process, depending on the MDisk type and utilization, for each copy independently.
Note: Consider the following rules for creating remote partnerships between the SVC and
Storwize Family systems:
An SVC system is always in the replication layer.
By default, a Storwize system is in the storage layer but can be changed to the replication layer.
A system can form partnerships only with systems in the same layer.
An SVC system can virtualize a Storwize system only if the Storwize system is in the storage layer.
Starting with version 6.4, a Storwize system in the replication layer can virtualize another Storwize system in the storage layer.
In a typical Ethernet network data flow, the data transfer slows down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that are sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 8-13.
By using the Bridgeworks SANSlide technology, this typical behavior can be eliminated with
enhanced parallelism of the data flow by using multiple virtual connections (VC) that share IP
links and addresses. The artificial intelligence engine can dynamically adjust the number of
VCs, receive window size, and packet size as appropriate to maintain optimum performance.
While the engine is waiting for one VC’s ACK, it sends more packets across other VCs. If
packets are lost from any VC, data is automatically retransmitted, as shown in Figure 8-27 on
page 456.
Figure 8-14 Optimized network data flow by using Bridgeworks SANSlide technology
With native IP partnership, the following Copy Services features are supported:
MM
Referred to as synchronous replication, MM provides a consistent copy of a source virtual
disk on a target virtual disk. Data is written to the target virtual disk synchronously after it
is written to the source virtual disk so that the copy is continuously updated.
GM and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously
so that the copy is continuously updated. However, the copy might not contain the last few
updates if a disaster recovery operation is performed. An added extension to GM is GM
with Change Volumes. GM with Change Volumes is the preferred method for use with
native IP replication.
Management IP and iSCSI IP on the same port can be in a different network starting with
Storwize code 7.4.
An added layer of security is provided by using Challenge Handshake Authentication
Protocol (CHAP) authentication.
TCP ports 3260 and 3265 are used for IP partnership communications; therefore, these
ports must be open in firewalls between the systems.
The following maximum throughput restrictions apply, based on the use of 1 Gbps or 10 Gbps ports:
– One 1 Gbps port might transfer up to 110 MBps
– Two 1 Gbps ports might transfer up to 220 MBps
– One 10 Gbps port might transfer up to 190 MBps
– Two 10 Gbps ports might transfer up to 280 MBps
Note: The definition of the Bandwidth setting that is used when IP partnerships are created has changed. Previously, the bandwidth setting defaulted to 50 MBps and was the maximum transfer rate from the primary site to the secondary site for the initial sync and resyncs of volumes.
The Link Bandwidth setting is now configured in megabits per second (Mbps), not MBps. Set the Link Bandwidth to a value that the communication link can sustain, or to the amount that is allocated for replication. The Background Copy Rate setting is now a percentage of the Link Bandwidth, and it determines the bandwidth that is available for the initial sync and resyncs or for GM with Change Volumes.
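As an illustration of these settings, assuming an existing partnership to a remote system named ITSO_SVC_B (a hypothetical name), the link bandwidth and background copy rate might be adjusted as follows; verify the parameter names for your code level:

# Allow 1000 Mbps for replication; 50% of it (500 Mbps) is available for initial sync and resyncs
svctask chpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_SVC_B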
When the VLAN ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication on Storwize, the appropriate VLAN settings on the Ethernet network
and servers must be configured correctly in order not to experience connectivity issues. After
the VLANs are configured, changes to the VLAN settings will disrupt iSCSI and IP replication
traffic to and from Storwize.
During the VLAN configuration for each IP address, the user must be aware that if the VLAN
settings for the local and failover ports on two nodes of an I/O Group differ, switches must be
configured so that failover VLANs are configured on the local switch ports also so that the
failover of IP addresses from a failing node to a surviving node succeeds. If failover VLANs
are not configured on the local switch ports, the user loses paths to Storwize and Storwize
storage during a node failure.
Remote copy group or remote copy port group: The following numbers group a set of IP addresses that are connected to the same physical link. Therefore, only IP addresses that are part of the same remote copy group can form remote copy connections with the partner system:
– 0: Ports that are not configured for remote copy
– 1: Ports that belong to remote copy port group 1
– 2: Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host-attach and remote copy functionality. Therefore, the correct settings must be applied to each IP address.
IP partnership: Two SVC systems that are partnered to perform remote copy over native IP links.
FC partnership: Two SVC systems that are partnered to perform remote copy over native FC links.
Failover: Failure of a node within an I/O Group causes all virtual disks that are owned by this node to fail over to the surviving node. When the configuration node of the system fails, management IPs also fail over to an alternative node.
Failback: When the failed node rejoins the system, all IP addresses that failed over are failed back from the surviving node to the rejoined node, and virtual disk access is restored through this node.
IP partnership or partnership over native IP links: These terms are used to describe the IP partnership feature.
The following steps must be completed to establish two systems in the IP partnerships:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory and users can choose to not configure the CHAP secret.
2. If required, the administrator configures the system IP addresses on both local and remote
systems so that they can discover each other over the network.
3. If you want to use VLANs, configure your LAN switches and Storwize Ethernet ports to use
VLAN tagging.
4. The administrator configures the SVC ports on each node in both of the systems by using
the svctask cfgportip command (see the example that follows these steps) and completes the following steps:
a. Configure the IP addresses for remote copy data.
b. Add the IP addresses in the respective remote copy port group.
c. Define whether host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system where the partnership state then transitions to the Partially_Configured_Local
state.
6. The administrator establishes the partnership from the remote system with the local
system, and if successful, the partnership state then transitions to the Fully_Configured
state, which implies that the partnerships over the IP network were successfully
established. The partnership state momentarily remains in the not_present state before
transitioning to the fully_configured state.
7. The administrator creates MM, GM, and GM with Change Volume relationships.
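The following sketch illustrates step 4 for a single port on one node. The node name, port ID, and IP addresses are hypothetical, and the exact cfgportip parameters can differ between code levels:

# Assign an IPv4 address to Ethernet port 1 of node1, place it in remote copy port group 1,
# and disable iSCSI host access on that address (parameter names are assumptions; verify them)
svctask cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 -host no 1
# Confirm which ports are configured for remote copy
svcinfo lsportip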
The SVC IP addresses that are connected to the same physical link are designated with
identical remote copy port groups. The SVC supports three remote copy groups: 0, 1, and 2.
The SVC IP addresses are, by default, in remote copy port group 0. Ports in port group 0 are not considered for creating remote copy data paths between two systems. For partnerships to be established, the IP addresses that are intended for replication must be assigned to remote copy port group 1 or 2.
You can assign one IPv4 address and one IPv6 address to each Ethernet port on the SVC
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group. The administrator might want to use IPv6 addresses for
remote copy operations and use IPv4 addresses on that same port for iSCSI host attach. This
configuration also implies that for two systems to establish an IP partnership, both systems
must have IPv6 addresses that are configured.
Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.
Note: To establish an IP partnership, each SVC node must have only a single remote copy
port group that is configured, that is, 1 or 2. The remaining IP addresses must be in remote
copy port group 0.
Figure 8-15 Single link with only one remote copy port group that is configured in each system
As shown in Figure 8-15 on page 442, two systems exist: System A and System B. A
single remote copy port group 1 is created on Node A1 on System A and on Node B2 on
System B because only a single inter-site link exists to facilitate the IP partnership traffic.
(The administrator might choose to configure the remote copy port group on Node B1 on
System B instead of Node B2.) At any time, only the IP addresses that are configured in
remote copy port group 1 on the nodes in System A and System B participate in
establishing data paths between the two systems after the IP partnerships are created. In
this configuration, no failover ports are configured on the partner node in the same I/O
Group.
This configuration has the following characteristics:
– Only one node in each system has a configured remote copy port group, and no
failover ports are configured.
– If Node A1 in System A or Node B2 in System B failed, the IP partnership stops and
enters the not_present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state changes to the fully_configured state.
– If the inter-site system link fails, the IP partnerships transition to the not_present state.
– This configuration is not recommended because it is not resilient to node failures.
Two 2-node systems are in an IP partnership over a single inter-site link (with configured
failover ports), as shown in Figure 8-16 (configuration 2).
Figure 8-16 Only one remote copy group on each system and nodes with failover ports configured
As shown in Figure 8-16, two systems exist: System A and System B. A single remote
copy port group 1 is configured on two Ethernet ports, one each, on Node A1 and Node
A2 on System A and similarly, on Node B1 and Node B2 on System B. Although two ports
on each system are configured for remote copy port group 1, only one Ethernet port in
each system actively participates in the IP partnership process. This selection is
determined by a path configuration algorithm that is designed to choose data paths
between the two systems to optimize performance.
Figure 8-17 Multinode systems single inter-site link with only one remote copy port group
As shown in Figure 8-17 on page 444, there are two 4-node systems: System A and
System B. A single remote copy port group 1 is configured on nodes A1, A2, A3, and A4
on System A at Site A, and on nodes B1, B2, B3, and B4 on System B at Site B. Although
four ports are configured for remote copy group 1, only one Ethernet port in each remote
copy port group on each system actively participates in the IP partnership process. Port
selection is determined by a path configuration algorithm. The other ports play the role of
standby ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with remote copy port group 1 from any of the nodes from either of the two I/O
Groups in System A. However, it might take time (generally tens of seconds) for discovery
and path configuration logic to re-establish the paths after the failover and this process can
cause partnerships to transition to the not_present state. This result leads remote copy
relationships to stop and the administrator might need to manually verify the issues in the
event log and start the relationships or remote copy Consistency Groups, if they do not
autorecover. The details about the particular IP port that is actively participating in the IP
partnership process are provided in the svcinfo lsportip view (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both I/O Groups.
However, only one port in that remote copy port group remains active and participates
in the IP partnership on each system.
– If Node A1 in System A or Node B2 in System B encounters a failure, discovery of the
IP partnership is triggered, and the partnership continues servicing the I/O from the
failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
An eight-node system is in an IP partnership with a four-node system over a single
inter-site link, as shown in Figure 8-18 on page 446 (configuration 4).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both the I/O Groups
that are identified for participating in IP replication. However, only one port in that
remote copy port group remains active on each system and participates in IP
replication.
– If Node A1 in System A or Node B2 in System B fails in the system, the IP partnerships
trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
Two 2-node systems exist with two inter-site links, as shown in Figure 8-19 (configuration
5).
Figure 8-19 Dual links with two remote copy groups on each system are configured
As shown in Figure 8-19, remote copy port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O Group. Instead, the ports
are maintained in different remote copy port groups on both of the nodes and they remain
active and participate in the IP partnership by using both of the links.
However, if either of the nodes in the I/O Group fails (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in remote
copy port group 2. Therefore, the effective bandwidth of the two links is reduced to 50%.
Only the bandwidth of a single link is available until the failure is resolved.
This configuration has the following characteristics:
– Two inter-site links exist and two remote copy port groups are configured.
– Each node has only one IP port in remote copy port group 1 or 2.
– Both the IP ports in the two remote copy port groups participate simultaneously in the
IP partnerships. Therefore, both of the links are used.
Figure 8-20 Multinode systems with dual inter-site links between the two systems
As shown in Figure 8-20, there are two 4-node systems: System A and System B. This
configuration is an extension of configuration 5 to a multinode multi-I/O Group
environment. As seen in this configuration, two I/O Groups exist and each node in the I/O
Group has a single port that is configured in remote copy port group 1 or 2. Although two
ports are configured in remote copy port groups 1 and 2 on each system, only one IP port
in each remote copy port group on each system actively participates in the IP partnership.
The other ports that are configured in the same remote copy port group act as standby
ports in a failure. Which port in a configured remote copy port group participates in the IP
partnership at any moment is determined by a path configuration algorithm.
In this configuration, if Node A1 fails in System A, the IP partnership traffic continues from
Node A2 (that is, remote copy port group 2) and at the same time the failover also causes
discovery in remote copy port group 1. Therefore, the IP partnership traffic continues from
Node A3 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process are provided in the
svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in the I/O Group 1 or 2.
However, only one port per system in both remote copy port groups remains active and
participates in the IP partnership.
– Only a single port per system from each configured remote copy port group
participates simultaneously in the IP partnership. Therefore, both of the links are used.
– During node failure or port failure of a node that is actively participating in the IP
partnership, the IP partnership continues from the alternative port because another
port is in the system in the same remote copy port group but in a different I/O Group.
– The pathing algorithm can start the discovery of an available port in the affected
remote copy port group in the second I/O Group and pathing is re-established, which
restores the total bandwidth, that is, both of the links are available to support the IP
partnership.
An eight-node system is in an IP partnership with a four-node system over dual inter-site
links, as shown in Figure 8-21 on page 450 (configuration 7).
If Node A1 fails in System A, IP partnership traffic continues from Node A2 (that is, remote
copy port group 2) and the failover also causes IP partnership traffic to continue from
Node A5 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process are provided in the
svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
– Two I/O Groups exist with nodes in those I/O Groups that are configured in two remote
copy port groups because two inter-site links are available for participating in the IP
partnership. However, only one port per system in a particular remote copy port group
remains active and participates in the IP partnership.
– One port per system from each remote copy port group participates in the IP
partnership simultaneously. Therefore, both of the links are used.
– If a node or a port that is actively participating in the IP partnership fails, the remote
copy data path is re-established from another port, because a port in the same remote
copy port group is available on an alternative node in the system.
– The path selection algorithm starts discovery of the available port in the affected
remote copy port group in the alternative I/O Groups and paths are re-established,
restoring the total bandwidth across both links.
– The remaining or all of the I/O Groups can be in remote copy partnerships with other
systems.
An example of an unsupported configuration for a single inter-site link is shown in Figure 8-22
(configuration 8).
Figure 8-22 Two node systems with single inter-site link and remote copy port groups are
configured
Figure 8-23 Dual links with two remote copy port groups with failover port groups are configured
In this configuration, one port on each node in System A and System B is configured in
remote copy group 1 to establish an IP partnership and to support remote copy
relationships. A dedicated inter-site link is used for IP partnership traffic and iSCSI host
attach is disabled on those ports.
The following configuration steps are used:
a. Configure system IP addresses correctly so that they can be reached over the inter-site
link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6 and then assign IP
addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on both systems by using the following settings:
• Remote copy group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set. (The default MTU is 1500 on the SVC.)
e. Establish the IP partnerships from both of the systems.
f. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
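To illustrate steps e and f, the following sketch establishes an IPv4 partnership from both systems and checks its state. The cluster IP addresses and bandwidth values are hypothetical; verify the parameter names for your code level:

# On System A, point to the cluster IP of System B
svctask mkippartnership -type ipv4 -clusterip 192.168.20.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50
# On System B, point to the cluster IP of System A
svctask mkippartnership -type ipv4 -clusterip 192.168.10.10 -linkbandwidthmbits 1000 -backgroundcopyrate 50
# On either system, verify that the partnership reaches the fully_configured state
svcinfo lspartnership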
An example deployment for configuration 5 with ports that are shared with host access is
shown in Figure 8-25 (configuration 11).
In this configuration, IP ports are shared by both iSCSI hosts and the IP partnership.
The following configuration steps are used:
a. Configure the system IP addresses correctly so that they can be reached over the
inter-site link.
b. Qualify whether the IP partnerships must be created over IPv4 or IPv6 and then assign
IP addresses and open firewall ports 3260 and 3265.
The SVC provides a single point of control when remote copy is enabled in your network
(regardless of the disk subsystems that are used) if those disk subsystems are supported by
the SVC.
The general application of SVC Remote Copy services is to maintain two real-time
synchronized copies of a disk. Often, two copies are geographically dispersed between two
SVC systems, although it is possible to use MM or GM within a single system (within an I/O
Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.
Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship where resource allocation is shared between the systems.
Licensing must also be doubled because the source and the target are within the same
system.
Use intercluster MM/GM when possible. For mirroring volumes in the same I/O Group, it is
better to use Volume Mirroring or the FlashCopy feature.
A typical application of this function is to set up a dual-site solution that uses two SVC
systems. The first site is considered the primary or production site, and the second site is
considered the backup site or failover site, which is activated when a failure at the first site is
detected.
Note: For more information about restrictions and limitations of native IP replication, see
8.6.2, “IP partnership limitations” on page 438.
Object name length: SVC 6.1 supports object names up to 63 characters. Previous levels
supported object names of up to 15 characters only.
When SVC 6.1 systems are partnered with 4.3.1 and 5.1.0 systems, various object names
are truncated at 15 characters when they are displayed from 4.3.1 and 5.1.0 systems.
Figure 8-27 shows four systems in a star topology, with System A at the center. System A can
be a central DR site for the three other locations.
By using a star topology, you can migrate applications by using a process, such as the
process that is described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C
Figure 8-29 shows an example of an SVC fully connected topology, for example: A → B, A →
C, A → D, B → D, and C → D.
Figure 8-29 is a fully connected mesh in which every system has a partnership to each of the
three other systems. This topology allows volumes to be replicated between any pair of
systems, for example: A → B, A → C, and B → C.
Although systems can have up to three partnerships, volumes can be part of only one remote
copy relationship, for example, A → B.
System partnership intermix: All of the preceding topologies are valid for the intermix of
an SVC with another SVC if the SVC is set to the replication layer and running 6.3.0 code
or later.
An application that performs a high volume of database updates is designed with the concept
of dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing the order of writes, or performing them in
a different order than the application intended, can undermine the application's algorithms
and can lead to problems, such as detected or undetected data corruption.
The SVC MM and GM implementation operates in a manner that is designed to always keep
a consistent image at the secondary site. The SVC GM implementation uses complex
algorithms that operate to identify sets of data and number those sets of data in sequence.
The data is then applied at the secondary site in the defined sequence.
For more information about dependent writes, see 8.4.3, “Consistency Groups” on page 416.
Figure 8-31 on page 459 shows the concept of MM Consistency Groups. The same concept
applies to GM Consistency Groups.
Figure 8-31 MM Consistency Group
Because MM_Relationship 1 and 2 are part of the Consistency Group, they can be handled
as one entity. The stand-alone MM_Relationship 3 is handled separately.
Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.
Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.
For example, consider the case of two applications that are independent, yet they are placed
into a single Consistency Group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is progressing,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.
Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.
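As a simple sketch of grouping relationships, assuming a partner system named ITSO_SVC_B and hypothetical volume and group names, a Consistency Group might be created and started as follows:

# Create an empty Consistency Group that spans the local and remote systems
svctask mkrcconsistgrp -cluster ITSO_SVC_B -name CG_APP1
# Create two MM relationships and place them into the Consistency Group
svctask mkrcrelationship -master DB_MASTER -aux DB_AUX -cluster ITSO_SVC_B -consistgrp CG_APP1 -name REL_DB
svctask mkrcrelationship -master LOG_MASTER -aux LOG_AUX -cluster ITSO_SVC_B -consistgrp CG_APP1 -name REL_LOG
# Start copying for all relationships in the group as one unit
svctask startrcconsistgrp CG_APP1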
Zoning
The SVC node ports on each SVC system must communicate with each other to create the
partnership. Switch zoning is critical to facilitating intercluster communication.
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and they are repaired to maintain operation where possible. If communication
between the SVC systems is interrupted or lost, an event is logged (and the MM and GM
relationships stop).
Alerts: You can configure the SVC to raise Simple Network Management Protocol (SNMP)
traps to the enterprise monitoring system to alert on events that indicate that an
interruption in internode communication occurred.
Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This
database is updated as devices appear and disappear.
Devices that advertise themselves as SVC nodes are categorized according to the SVC
system to which they belong. The SVC nodes that belong to the same system establish
communication channels between themselves and begin to exchange messages to
implement clustering and the functional protocols of the SVC.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
Note: You can use chsystem with -partnerfcportmask to dedicate several Storwize FC
ports only to system-to-system traffic to ensure that remote copy is not affected by other
traffic, such as host-to-node traffic or node-to-node traffic within the same system.
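For example, to dedicate node FC ports 3 and 4 to system-to-system traffic, a port mask such as the following might be applied. The mask is a binary string with one bit per port, read from the rightmost bit (port 1); confirm the mask format for your code level:

# Bits 3 and 4 set: only FC ports 3 and 4 are used for partner (system-to-system) traffic
svctask chsystem -partnerfcportmask 0000000000001100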
Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your MM auxiliary
location.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups and GM Consistency Groups (FlashCopy
Consistency Groups and GM Consistency Groups are described in 8.4, “Implementing the
SAN Volume Controller FlashCopy” on page 414).
Two SVC systems must be defined in an SVC partnership, which must be performed on both
SVC systems to establish a fully functional MM partnership.
Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, MM suspends writes to the
auxiliary volume and allows I/O to the master volume to continue to avoid affecting the
operation of the master volumes.
Figure 8-32 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional FC MM has distance limitations that
are based on your performance requirements. The SVC does not support more than 300 km
(186.4 miles).
8.7.6 Metro Mirror features
SVC MM supports the following features:
Synchronous remote copy of volumes that are dispersed over metropolitan distances.
The SVC implements MM relationships between volume pairs, with each volume in a pair
that is managed by an SVC system (requires code version 6.3.0 or later).
The SVC supports intracluster MM where both volumes belong to the same system (and
I/O Group).
The SVC supports intercluster MM where each volume belongs to a separate SVC
system. You can configure a specific SVC system for partnership with another system. All
intercluster MM processing occurs between two SVC systems that are configured in a
partnership.
Intercluster and intracluster MM can be used concurrently.
The SVC does not require that a control network or fabric is installed to manage MM. For
intercluster MM, the SVC maintains a control link between two systems. This control link is
used to control the state and coordinate updates at either end. The control link is
implemented on top of the same FC fabric connection that the SVC uses for MM I/O.
The SVC implements a configuration model that maintains the MM configuration and state
through major events, such as failover, recovery, and resynchronization, to minimize user
configuration action through these events.
The SVC allows the resynchronization of changed data so that write failures that occur on the
master or auxiliary volumes do not require a complete resynchronization of the relationship.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups.
GM writes data to the auxiliary volume asynchronously, which means that host writes to the
master volume provide the host with confirmation that the write is complete before the I/O
completes on the auxiliary volume.
Limit: When a local fabric and a remote fabric are connected for GM purposes, the ISL
hop count between a local node and a remote node must not exceed seven hops.
The GM function provides the same function as MM remote copy, but over long-distance links
with higher latency without requiring the hosts to wait for the full round-trip delay of the
long-distance link.
Figure 8-33 on page 465 shows that a write operation to the master volume is acknowledged
back to the host that is issuing the write before the write operation is mirrored to the cache for
the auxiliary volume.
Figure 8-33 GM write sequence
The GM algorithms always maintain a consistent image on the auxiliary. They achieve this
consistent image by identifying sets of I/Os that are active concurrently at the master,
assigning an order to those sets, and applying those sets of I/Os in the assigned order at the
secondary. As a result, GM maintains the features of Write Ordering and Read Stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system; therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.
In SVC code 7.2, these algorithms are enhanced to optimize GM behavior and latency even
further. GM write I/O from the production SVC system to a secondary SVC system requires
serialization and sequence-tagging before being sent across the network to the remote site
(to maintain a write-order consistent copy of data). In code versions before 7.2,
sequence-tagged GM writes were processed on the secondary system without parallelism,
and the management of write I/O sequencing imposed more latency on write I/Os. As a result,
high-bandwidth GM throughput environments could experience performance impacts on the
primary system during high I/O peak periods.
Starting with code V7.2, Storwize allows more parallelism in processing and managing GM
writes on a secondary system by using the following methods:
Nodes on the secondary system store replication writes in new redundant non-volatile
cache
Cache content details are shared between nodes
Cache content details are batched together to make node-to-node latency less of an issue
Nodes intelligently apply these batches in parallel as soon as possible
Nodes internally manage and optimize GM secondary write I/O processing
In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.
GM is supported over FC, FC over IP (FCIP), FC over Ethernet (FCoE), and native IP
connections. The maximum supported round-trip latency is 80 ms, which corresponds to about
4,000 km (2,485.48 miles) between mirrored systems. However, starting with code V7.4,
this limit was significantly increased for certain SVC Gen2 and SVC configurations.
Figure 8-34 shows the current supported distances for GM remote copy.
The SVC implements flexible resynchronization support, enabling it to resynchronize
volume pairs that experienced write I/Os to both disks and to resynchronize only those
regions that changed.
An optional feature for GM permits a delay simulation to be applied on writes that are sent
to auxiliary volumes. The delay simulation is useful in intracluster scenarios for testing
purposes.
As of SVC 6.3.0 and later, GM source and target volumes can be associated with Change
Volumes.
Colliding writes
Before V4.3.1, the GM algorithm required that only a single write is active on any 512-byte
logical block address (LBA) of a volume. If a further write is received from a host while the
auxiliary write is still active (even though the master write might complete), the new host write
is delayed until the auxiliary write is complete. This restriction is needed if a series of writes to
the auxiliary must be retried (which is called reconstruction). Conceptually, the data for
reconstruction comes from the master volume.
If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent.
Applications that deliver such write activity do not achieve the performance that GM is
intended to support. A volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the GM
algorithm. Master writes still need to be serialized, and the intermediate states of the master
data must be kept in a non-volatile journal while the writes are outstanding to maintain the
correct write ordering during reconstruction. Reconstruction must never overwrite data on the
auxiliary with an earlier version. The volume statistic that is monitoring colliding writes is now
limited to those writes that are not affected by this change.
Delay simulation
An optional feature for GM permits a delay simulation to be applied on writes that are sent to
auxiliary volumes. This feature allows you to perform testing that detects colliding writes;
therefore, you can use it to test an application before the full deployment of the feature. The feature can
can be enabled separately for each intracluster or intercluster GM. You specify the delay
setting by using the chsystem command and view the delay by using the lssystem command.
The gm_intra_cluster_delay_simulation field expresses the amount of time that intracluster
auxiliary I/Os are delayed. The gm_inter_cluster_delay_simulation field expresses the
amount of time that intercluster auxiliary I/Os are delayed. A value of zero disables the
feature.
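A short sketch follows, assuming that the chsystem parameter names correspond to the lssystem fields that are described above (verify them against your code level):

# Delay intercluster auxiliary writes by 20 ms for testing
svctask chsystem -gminterdelaysimulation 20
# Disable the intracluster delay simulation
svctask chsystem -gmintradelaysimulation 0
# View the current values in the gm_inter_cluster_delay_simulation and
# gm_intra_cluster_delay_simulation fields
svcinfo lssystem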
Tip: If you are experiencing repeated problems with the delay on your link, ensure that the
delay simulator was correctly disabled.
GM has functionality that is designed to address the following conditions, which might
negatively affect certain GM implementations:
The estimation of the bandwidth requirements tends to be complex.
Ensuring that the latency and bandwidth requirements can be met is often difficult.
Congested hosts on the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
To address these issues, Change Volumes were added as an option for GM relationships.
Change Volumes use the FlashCopy functionality, but they cannot be manipulated as
FlashCopy volumes because they are for a special purpose only. Change Volumes replicate
point-in-time images on a cycling period; the default is 300 seconds. Only the data as it exists
at the point in time that the image is taken needs to be replicated, instead of all of the updates
that occur during the period. The use of this function can provide significant reductions
in replication volume.
GM with Change Volumes has the following characteristics:
Larger RPO
Point-in-time copies
Asynchronous
Possible system performance overhead because point-in-time copies are created locally
With GM with Change Volumes, this environment looks as shown in Figure 8-37.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary GM volume at the
target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.
How Change Volumes might save you replication traffic is shown in Figure 8-38 on page 470.
In Figure 8-38, you can see a number of I/Os on the source and the same number on the
target, and in the same order. Assuming that this data is the same set of data being updated
repeatedly, this approach results in wasted network traffic. The I/O can be completed much
more efficiently, as shown in Figure 8-39.
In Figure 8-39, the same data is being updated repeatedly; therefore, Change Volumes
demonstrate significant I/O transmission savings by needing to send I/O number 16 only,
which was the last I/O before the cycling period.
You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes offers the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
Carefully consider your business requirements versus the performance of GM with Change
Volumes. GM with Change Volumes increases the intercluster traffic for more frequent cycling
periods. Therefore, selecting the shortest cycle periods possible is not always the answer. In
most cases, the default must meet requirements and perform well.
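For example, assuming a GM with Change Volumes relationship named GMCV_REL1 (a hypothetical name), the cycle period might be lengthened to 10 minutes as follows:

# Set a 600-second cycling period; if each cycle completes within the period,
# the achievable RPO is at most 1,200 seconds (twice the cycling period)
svctask chrcrelationship -cycleperiodseconds 600 GMCV_REL1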
Important: When you create your GM volumes with Change Volumes, ensure that you
remember to select the Change Volume on the auxiliary (target) site. Failure to do so
leaves you exposed during a resynchronization operation.
8.7.12 Distribution of work among nodes
For the best performance, MM/GM volumes must have their preferred nodes evenly
distributed among the nodes of the systems. Each volume within an I/O Group has a
preferred node property that can be used to balance the I/O load between nodes in that
group. MM/GM also uses this property to route I/O between systems.
If this preferred practice is not maintained, for example, if source volumes are assigned to only
one node in the I/O Group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. Starting with firmware V7.3, the preferred node can be
changed without changing the I/O Group, and without affecting host I/O to that particular
volume. Additionally, you can now also change the preferred node for volumes that
are in a remote copy relationship. The remote copy relationship type does not matter. (The
remote copy relationship type can be MM, GM, or GM with Change Volumes.) You can
change the preferred node both to the source and target volumes that are participating in the
remote copy relationship.
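A minimal sketch of rebalancing a single volume follows, assuming firmware V7.3 or later and a hypothetical volume named VDISK01 whose preferred node is changed to node2; confirm the movevdisk syntax and its remote copy support for your code level:

# Change the preferred node of VDISK01 to node2 without disrupting host I/O
svctask movevdisk -node node2 VDISK01
# Verify the new preferred node in the volume details
svcinfo lsvdisk VDISK01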
Background copy I/O is scheduled to avoid bursts of activity that might adversely affect
system behavior. An entire grain of tracks on one volume is processed at around the same
time but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for a while.
Multiple grains might be copied simultaneously and might be enough to satisfy the requested
rate, unless the available resources cannot sustain the requested rate.
GM paces the rate at which background copy is performed by the appropriate relationships.
Background copy occurs on relationships that are in the InconsistentCopying state with a
status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly among all
of the nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node, in turn, divides its allocation evenly between the multiple relationships that are
performing a background copy.
Important: The background copy value is a system-wide parameter that can be changed
dynamically but only on a system basis and not on a relationship basis. Therefore, the copy
rate of all relationships changes when this value is increased or decreased. In systems
with many remote copy relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed between
1 - 1000 MBps.
With this technique, do not allow I/O on the master or auxiliary before the relationship is
established.
Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, therefore, creating a data loss or data integrity
exposure for hosts that access data on the auxiliary volume.
Switching copy direction: The copy direction for an MM relationship can be switched so
that the auxiliary volume becomes the master, and the master volume becomes the
auxiliary, which is similar to the FlashCopy restore option. However, although the
FlashCopy target volume can operate in read/write mode, the target volume of the started
remote copy is always in read-only mode.
While the MM relationship is active, the auxiliary volume is not accessible for host application
write I/O at any time. The SVC allows read-only access to the auxiliary volume when it
contains a consistent image. Storwize allows boot time operating system discovery to
complete without error, so that any hosts at the secondary site can be ready to start the
applications with minimum delay, if required.
For example, many operating systems must read LBA 0 to configure a logical unit. Although
read access is allowed at the auxiliary in practice, the data on the auxiliary volumes cannot be
read by a host because most operating systems write a “dirty bit” to the file system when it is
mounted. Because this write operation is not allowed on the auxiliary volume, the volume
cannot be mounted.
This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the MM
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
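For example, assuming a stand-alone relationship named MM_REL1 and a Consistency Group named CG_APP1 (both hypothetical names), access to the auxiliary volumes might be enabled as follows:

# Stop a stand-alone relationship and allow host write access to its auxiliary volume
svctask stoprcrelationship -access MM_REL1
# Stop a Consistency Group and allow access to all of its auxiliary volumes
svctask stoprcconsistgrp -access CG_APP1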
For example, the MM requirement to enable the auxiliary copy for access differentiates it from
third-party mirroring software on the host, which aims to emulate a single, reliable disk
regardless of what system is accessing it. MM retains the property that two volumes exist, but
it suppresses one volume while the copy is being maintained.
The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required and that the tasks to be performed on the host that is involved in
establishing the operation on the auxiliary copy are substantial. The goal is to make this copy
rapid (much faster when compared to recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The SVC
provides SNMP traps and programming (or scripting) for the CLI to enable this automation.
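As a sketch of such scripting, assuming the hypothetical Consistency Group CG_APP1 and that the administrator already decided that a failover is required, the copy direction might be reversed as follows; validate the sequence against your recovery procedures:

# If the relationships are still running, switch the direction so that the auxiliary copies become primary
svctask switchrcconsistgrp -primary aux CG_APP1
# If the group was stopped with access enabled, restart it in the reverse direction instead
svctask startrcconsistgrp -primary aux -force CG_APP1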
Number of MM or GM Consistency Groups per system: 256
Number of MM or GM relationships per system: 8,192
Number of MM or GM relationships per Consistency Group: 8,192
Total volume size per I/O Group: A per-I/O Group limit of 1,024 TB exists on the quantity of master
and auxiliary volume address spaces that can participate in MM and GM relationships. This
maximum configuration uses all 512 MB of bitmap space for the I/O Group, and it allows 10 MB of
space for all remaining copy services features.
In Figure 8-40 on page 475, the MM/GM relationship state diagram shows an overview of the
states that can apply to an MM/GM relationship in a connected state.
Figure 8-40 MM or GM mapping state diagram
When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are created for volumes that were created
with the format option.
Stop or error
When an MM/GM relationship is stopped (intentionally or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.
If the connection is broken between the SVC systems that are in a partnership, all
(intercluster) MM/GM relationships enter a Disconnected state. For more information, see
“Connected versus disconnected” on page 476.
State overview
In the following sections, we provide an overview of the various MM/GM states.
When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships that span them are described as disconnected.
In this state, both systems are left with fragmented relationships and are limited as far as the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.
When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when the relationship became disconnected or enter a new state.
Relationships that are configured between volumes in the same SVC system (intracluster)
are never described as being in a disconnected state.
An auxiliary volume is described as consistent if it contains data that might have been read by
a host system from the master if power had failed at an imaginary point while I/O was in
progress and power was later restored. This imaginary point is defined as the recovery point. The
requirements for consistency are expressed regarding activity at the master up to the
recovery point.
The auxiliary volume contains the data from all of the writes to the master for which the host
received successful completion and that data was not overwritten by a subsequent write
(before the recovery point).
For writes for which the host did not receive a successful completion (that is, it received a bad
completion or no completion at all), if the host then read that data from the master, the read
returned a successful completion, and no later write was sent before the recovery point, the
auxiliary contains the same data as the data that was returned by the read from the master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).
For more information about dependent writes, see 8.4.3, “Consistency Groups” on page 416.
When deciding how to use Consistency Groups, the administrator must consider the scope of
an application’s data and all of the interdependent systems that communicate and exchange
information.
If two programs or systems communicate and store details as a result of the information that
is exchanged, either of the following approaches can be used:
All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.
When communication is lost for an extended period, MM/GM tracks the changes that
occurred on the master, but not the order or the details of the changes (write data). When
communication is restored, it is impossible to synchronize the auxiliary without sending write
data to the auxiliary out of order and, therefore, losing consistency.
Detailed states
In the following sections, we describe the states that are portrayed to the user for either
Consistency Groups or relationships. We also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying
and experienced a persistent error or received a stop command that caused the copy process
to stop.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O.
In this state, a background copy process runs that copies data from the master to the auxiliary
volume.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted, but it has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master.
This state can arise when a relationship was in a ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with a CreateConsistentFlag set to TRUE.
Normally, write activity that follows an I/O error causes updates to the master and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group transitions to
InconsistentCopying. Enter this command only after all outstanding events are repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O.
Writes that are sent to the master volume are also sent to the auxiliary volume. Either
successful completion must be received for both writes, the write must be failed to the host, or
a state must transition out of the ConsistentSynchronized state before a write is completed to
the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine the areas that must be copied after a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.
Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when it is reconnected.
While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and do not accept read or write
I/O.
Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.
When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions is true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write
I/O.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.
It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point the state of the relationship becomes the
state of the Consistency Group.
The remote host server is mapped to the auxiliary volume and the disk is available for I/O.
For more information about MM/GM commands, see the IBM System Storage SAN Volume
Controller and IBM Storwize V7000 Command-Line Interface User’s Guide, GC27-2287.
The command set for MM/GM contains the following broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes
If a configuration command affects more than one system, MM/GM performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
be performed only when the systems are connected and fail with no effect when they are
disconnected.
Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.
For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the
system that receives the command is called the local system.
The exception is the command that sets systems into an MM/GM partnership. The
mkfcpartnership or mkippartnership command must be issued on both the local and remote
systems.
The commands in this section are described as an abstract command set and are
implemented by either of the following methods:
The CLI can be used for scripting and automation.
The GUI can be used for one-off tasks.
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure that
connects the remote sites, regardless of the compression rates that you might achieve.
-gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values 60 - 86400 seconds in increments of 10
seconds. The default value is 300. Do not change this value except under the direction of
IBM Support.
Use the chsystem command to adjust these values, as shown in the following
example:
chsystem -gmlinktolerance 300
You can view all of these parameter values by using the lssystem command.
gmlinktolerance
We focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged, and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.
However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships and the application host’s response time
returns to normal. After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state until you fix the cause of the event and restart your GM
relationships. For this reason, ensure that you monitor the system to track when these 1920
events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
During SAN maintenance windows in which degraded performance is expected from SAN
components and application hosts can withstand extended response times from GM
volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If 1920 events are occurring, it can be necessary to use a performance monitoring and
analysis tool, such as the IBM Tivoli Storage Productivity Center, to help identify and resolve
the problem.
To establish a fully functional MM/GM partnership, you must issue this command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the SVC systems.
When the partnership is created, you can specify the bandwidth to be used by the
background copy process between the local SVC system and the remote SVC system. If it is
not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that
is less than or equal to the bandwidth that can be sustained by the intercluster link.
To set the background copy bandwidth optimally, ensure that you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak
foreground I/O workload. Perform this provisioning by calculation or by determining
experimentally how much background copy can be allowed before the foreground I/O latency
becomes unacceptable. Then, reduce the background copy to accommodate peaks in
workload and another safety margin.
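As a minimal, hedged sketch (the system names ITSO_SVC1 and ITSO_SVC2 and the bandwidth
values are assumptions for illustration; verify the exact parameter names against the 7.4 CLI
reference), an FC-based partnership might be created by issuing the command once on each
system:
IBM_2145:ITSO_SVC1:superuser>mkfcpartnership -linkbandwidthmbits 400 -backgroundcopyrate 50 ITSO_SVC2
IBM_2145:ITSO_SVC2:superuser>mkfcpartnership -linkbandwidthmbits 400 -backgroundcopyrate 50 ITSO_SVC1
In this sketch, -linkbandwidthmbits limits the traffic that replication can place on the
intercluster link, and -backgroundcopyrate limits the percentage of that bandwidth that the
background copy process can consume.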
The MM/GM Consistency Group name must be unique across all Consistency Groups that
are known to the systems that own this Consistency Group. If the Consistency Group involves
two systems, the systems must be in communication throughout the creation process.
The new Consistency Group does not contain any relationships, and it is in the Empty state.
You can add MM/GM relationships to the group (upon creation or afterward) by using the
chrcrelationship command.
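For illustration only (the group name ITSO_CG1, the remote system name ITSO_SVC1, and the
relationship name MMREL1 are assumptions), a Consistency Group might be created and an
existing relationship moved into it as follows:
IBM_2145:ITSO_SVC2:superuser>mkrcconsistgrp -name ITSO_CG1 -cluster ITSO_SVC1
IBM_2145:ITSO_SVC2:superuser>chrcrelationship -consistgrp ITSO_CG1 MMREL1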
Optional parameter: If you do not use the -global optional parameter, an MM relationship
is created instead of a GM relationship.
The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary volume cannot be in an existing relationship and they cannot be the targets of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when it
is successful.
When the MM/GM relationship is created, you can add it to a Consistency Group that exists
or it can be a stand-alone MM/GM relationship if no Consistency Group is specified.
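A hedged sketch of the creation command follows (the volume, system, group, and relationship
names are assumptions; add the -global parameter to create a GM rather than an MM
relationship):
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC1 -consistgrp ITSO_CG1 -name MMREL1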
When the command is issued, you can specify the master volume name and auxiliary system
to list the candidates that comply with the prerequisites to create an MM/GM relationship. If
the command is issued with no parameters, all of the volumes that are not disallowed by
another configuration state, such as being a FlashCopy target, are listed.
Adding an MM/GM relationship: When an MM/GM relationship is added to a
Consistency Group that is not empty, the relationship must have the same state and copy
direction as the group to be added to it.
When the command is issued, you can set the copy direction if it is undefined, and, optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
as an attempt to start a relationship that is already a part of a Consistency Group.
You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship. The use of the -force parameter here is a reminder
that the data on the auxiliary becomes inconsistent while resynchronization (background
copying) occurs and, therefore, is unusable for DR purposes before the background copy
completes.
In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
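A minimal sketch, assuming the relationship name MMREL1 from the earlier sketches: the first
command starts an idling relationship with the master as the copy source; the second adds
-force for the case where the restart leads to a temporarily inconsistent auxiliary:
IBM_2145:ITSO_SVC2:superuser>startrcrelationship -primary master MMREL1
IBM_2145:ITSO_SVC2:superuser>startrcrelationship -primary master -force MMREL1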
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.
For a Consistency Group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
If the Consistency Group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no
longer copied from the master to the auxiliary volumes that belong to the relationships in the
group. For a Consistency Group in the ConsistentSynchronized state, this command causes
a Consistency Freeze.
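The Consistency Group commands follow the same pattern; in this hedged sketch, the group
name ITSO_CG1 is an assumption, and the -access flag on the stop command is used only when
the auxiliary volumes must be made accessible to hosts:
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp -primary master ITSO_CG1
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp -access ITSO_CG1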
If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.
A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the
relationship from the Consistency Group.
If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
If the Consistency Group is disconnected at the time that the command is issued, the
Consistency Group is deleted only on the system on which the command is being run. When
the systems reconnect, the Consistency Group is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the Consistency
Group on both systems, you can issue the rmrcconsistgrp command
separately on both of the systems.
If the Consistency Group is not empty, the relationships within it are removed from the
Consistency Group before the group is deleted. These relationships then become
stand-alone relationships. The state of these relationships is not changed by the action of
removing them from the Consistency Group.
Important: Remember, by reversing the roles, your current source volumes become
targets and target volumes become source volumes. Therefore, you will lose write access
to your current primary volumes.
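As an illustrative sketch only (relationship and group names as assumed earlier), the role
reversal is performed with the switch commands, where -primary names the side that becomes
the new source:
IBM_2145:ITSO_SVC2:superuser>switchrcrelationship -primary aux MMREL1
IBM_2145:ITSO_SVC2:superuser>switchrcconsistgrp -primary aux ITSO_CG1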
The primary component of your round-trip time is the physical distance between sites. For
every 1,000 kilometers (621.36 miles), you observe a 5-millisecond delay each way. This
delay does not include the time that is added by equipment in the path. Every device adds a
varying amount of time depending on the device, but a good rule is 25 microseconds for pure
hardware devices. For software-based functions (such as compression that is implemented in
applications), the added delay tends to be much higher (usually in the millisecond plus
range.) Next, we describe an example of a physical delay.
Company A has a production site that is 1,900 kilometers (1,180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites.
Now, there are seven devices, and 1,900 kilometers (1,180.6 miles) of distance delay. All the
devices are adding 200 microseconds of delay each way. The distance adds 9.5 milliseconds
each way, for a total of 19 milliseconds. When combined with the device latency, the delay is
19.4 milliseconds of physical latency minimum, which is under the 80-millisecond limit of GM
until you realize that this number is the best case number.
The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link; therefore, ensure that you stay as far beneath the GM
round-trip-time (RTT) limit as possible. You can easily double or triple the expected physical
latency with a lower-quality network link or a lower-bandwidth network link. Then, you are
within the range of exceeding the limit if high I/O occurs that exceeds the existing bandwidth
capacity.
When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not correctly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to allow you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT or round-trip time by using
standard 64-byte ping packets.
In Figure 8-41 on page 491, you can see why the effective transit time must be measured only
by using packets that are large enough to hold an FC frame, or 2,148 bytes (2,112 bytes of
payload and 36 bytes of header). Allow some overhead to be safe because various switch
vendors have optional features that might increase this size. After you verify your latency by
using the proper packet size, proceed with normal hardware troubleshooting.
Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a certain bandwidth. The required time to
move a specific amount of data decreases as the data transmission rate increases.
Figure 8-41 on page 491 shows the orders of magnitude of difference between the link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient.
Note: Never use a TCP/IP ping to measure RTT for FCIP traffic.
Figure 8-41 Effect of packet size (in bytes) versus the link size
In Figure 8-41, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
64 bytes: The size of the common ping packet
1,500 bytes: The size of the standard TCP/IP packet
2,148 bytes: The size of an FC frame
Finally, your path maximum transmission unit (MTU) affects the delay that is incurred to get a
packet from one location to another. An MTU that is too small might cause fragmentation, and
an MTU that is too large can cause too many retransmits when a packet is lost.
The source of a 1720 error is most often a fabric problem or a problem in the network path
between your replication partners. When you receive this error, check your fabric configuration for zoning
of more than one host bus adapter (HBA) port for each node per I/O Group if your fabric has
more than 64 HBA ports that are zoned. One port for each node per I/O Group per fabric that
is associated with the host is the recommended zoning configuration for fabrics.
Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer through IBM Tivoli Storage Productivity
Center and comparing against your sample interval reveals potential SAN congestion. If a
zero buffer credit timer is above 2% of the total time of the sample interval, it might cause
problems.
Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences can indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
partnership information to verify its status and settings. Then, proceed to perform
diagnostics for every piece of equipment in the path between the two SVC systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.
If your investigations fail to resolve your remote copy problems, contact your IBM Support
representative for a complete analysis.
Chapter 9. SAN Volume Controller operations using the command-line interface
Command prefix changes: The svctask and svcinfo command prefixes are no longer
needed when you are issuing a command. If you have existing scripts that use those
prefixes, they continue to function. You do not need to change your scripts.
When the command syntax is shown, you see certain parameters in square brackets, for
example [parameter]. These brackets indicate that the parameter is optional in most (if not
all) instances. Any information that is not in square brackets is required information. You can
view the syntax of a command by entering one of the following commands:
svcinfo -? shows a complete list of informational commands.
svctask -? shows a complete list of task commands.
svcinfo commandname -? shows the syntax of informational commands.
svctask commandname -? shows the syntax of task commands.
svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the informational commands.
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.
If you review the syntax of the command by entering svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.
Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This
command produces an alphabetical list of actions that are supported. The command parameter
must be svcinfo for display commands or svctask for execution commands. The model
parameter allows for different shortcuts on different platforms, 2145 or 2076, as shown in the
following example:
Example 9-1 Shortcut commands
IBM_2145:ITSO_SVC2:superuser>svctask shortcuts 2145
addhostiogrp
addhostport
addmdisk
addnode
addvdiskaccess
addvdiskcopy
applydrivesoftware
applysoftware
cancellivedump
cfgportip
charray
charraymember
chauthservice
chcontroller
chcurrentuser
chdrive
chemail
chemailserver
chemailuser
chenclosure
chenclosurecanister
chenclosureslot
chencryption
cherrstate
cheventlog
chfcconsistgrp
chfcmap
chhost
chiogrp
chldap
chldapserver
chlicense
chmdisk
chmdiskgrp
chnode
chnodebattery
chnodebootdrive
chnodehw
chpartnership
chquorum
chrcconsistgrp
chrcrelationship
chsecurity
chsite
chsnmpserver
chsyslogserver
chsystem
chsystemip
chuser
chusergrp
chvdisk
cleardumps
clearerrlog
cpdumps
detectmdisk
dumpallmdiskbadblocks
dumpauditlog
dumperrlog
dumpmdiskbadblocks
enablecli
expandvdisksize
finderr
includemdisk
migrateexts
migratetoimage
migratevdisk
mkarray
mkcloudmdisk
mkemailserver
mkemailuser
mkfcconsistgrp
mkfcmap
mkfcpartnership
mkhost
mkippartnership
mkldapserver
mkmdiskgrp
mkpartnership
mkrcconsistgrp
mkrcrelationship
mksnmpserver
mksyslogserver
mkuser
mkusergrp
mkvdisk
mkvdiskhostmap
movevdisk
ping
preplivedump
prestartfcconsistgrp
prestartfcmap
recoverarray
recoverarraybysystem
recovervdisk
recovervdiskbyiogrp
recovervdiskbysystem
repairsevdiskcopy
repairvdiskcopy
resetleds
rmarray
rmemailserver
rmemailuser
rmfcconsistgrp
rmfcmap
rmhost
rmhostiogrp
rmhostport
rmldapserver
rmmdisk
rmmdiskgrp
rmnode
rmpartnership
rmportip
rmportip_tms
rmrcconsistgrp
rmrcrelationship
rmsnmpserver
rmsyslogserver
rmuser
rmusergrp
rmvdisk
rmvdiskaccess
rmvdiskcopy
rmvdiskhostmap
sendinventoryemail
setdisktrace
setlocale
setpwdreset
setsystemtime
settimezone
settrace
shrinkvdisksize
splitvdiskcopy
startemail
startfcconsistgrp
startfcmap
startrcconsistgrp
startrcrelationship
startstats
starttrace
stopemail
stopfcconsistgrp
stopfcmap
stoprcconsistgrp
stoprcrelationship
stopsystem
stoptrace
switchrcconsistgrp
switchrcrelationship
testemail
triggerdrivedump
triggerenclosuredump
triggerlivedump
writesernum
The use of reverse-i-search
If you work on your SVC with the same PuTTY session for many hours and enter many
commands, scrolling back to find your previous or similar commands can be a time-intensive
task. In this case, the use of the reverse-i-search command can help you quickly and easily
find any command that you issued in the history of your commands by using the Ctrl+R keys.
By using Ctrl+R, you can interactively search through the command history as you enter
commands. Pressing Ctrl+R at an empty command prompt gives you a prompt, as shown in
Example 9-2.
As shown in Example 9-2, we ran a lsiogrp command. By pressing Ctrl+R and entering s,
the command that we needed was recalled from history.
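Example 9-2 is not reproduced here; as a rough sketch, the interactive prompt typically
resembles the following, where typing s recalls the most recent matching command from the
history:
(reverse-i-search)`s': lsiogrp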
Child pools:
In the current SVC system, the disk space of a storage pool comes from MDisks, so the
capacity of a storage pool depends on the capacity of its MDisks. Creating or splitting a
storage pool is not free, and a user cannot freely create a storage pool with a particular
capacity. A child pool is a new object that is created from a physical (parent) storage pool.
It provides most of the functions that storage pools (mdiskgrps) have, for example, volume
creation, but the user can specify the capacity of the child pool at creation.
In EFS, GPFS has several demanding requirements when a file system is created or expanded.
Users must create the disks as network shared disks (NSDs) with a description of the disk
usage (data or metadata), which specifies the type of data to be stored on the disk, and then
use the NSDs to create the file system. Managing the file system and disks is not easy for
users without GPFS experience. By defining an internal volume creation interface in the child
pool, the EFS system can automatically manage the file system disks without interaction with
users. Users need to provide only a quota for the metadata and data child pools; the file
system disks are then provisioned automatically for GPFS, so the file system disk
manipulation is hidden from users.
Security settings:
The chsecurity command changes the security level for graphical user interface (GUI)
access.
WARNING: Changing the security level can affect the GUI connection. If this happens,
the SSH CLI can be used to change the security level back to a known good level.
To view the current security level, issue the lssecurity command by using the SSH
CLI.
Volume protection:
Currently, if you issue the rmvdisk command, you can delete a volume (VDisk) unless
that volume has a host mapping or is part of a FlashCopy, Metro Mirror, or Global
Mirror copy. In any of these cases, the rmvdisk command fails and you must use the
-force flag to override the failure.
When volume protection is enabled, you are protected from unintentionally deleting a
volume, even with the -force parameter added, within whatever time period you decide
on.
If the last I/O was within the specified time period, the rmvdisk command fails and
you must either wait until the volume really is considered idle, or disable the system
setting, delete or unmap the volume, and re-enable the setting.
IBM_2145:ITSO_SVC2:superuser>chsystem -vdiskprotectionenabled no
Issue the lssystem command to verify whether volume protection is enabled or
disabled:
vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller
vdisk_protection_time 0
vdisk_protection_enabled no
product_name IBM SAN Volume Controller
Note that the minimum time for volume protection is 15 minutes and the maximum
time is 1,440 minutes.
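As a hedged sketch of enabling the feature (the 60-minute window matches the lssystem output
shown above; the -vdiskprotectiontime parameter name should be verified against your code
level):
IBM_2145:ITSO_SVC2:superuser>chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60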
SVC 7.4.0.0 includes command changes and the addition of attributes and variables for
several existing commands. For more information, see the command reference or help, which
is available at this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html
To display more detailed information about a specific controller, run the lscontroller command
again and append the controller name or ID parameter (for example, controller ID 4), as shown in Example 9-4.
WWPN 202600A0B85AD223
path_count 6
max_path_count 6
Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “controller” because this prefix is reserved for SVC assignment only.
The lsdiscoverystatus command displays the state of all discoveries in the clustered system. During discovery,
the system updates the drive and MDisk records. You must wait until the discovery finishes
and is inactive before you attempt to use the system. This command displays one of the
following results:
Active: A discovery operation is in progress at the time that the command is issued.
Inactive: No discovery operations are in progress at the time that the command is issued.
9.3.4 Discovering MDisks
The clustered system detects the MDisks automatically when they appear in the network.
However, certain Fibre Channel (FC) controllers do not send the required Small Computer
System Interface (SCSI) primitives that are necessary to automatically discover the new
MDisks.
If new storage was attached and the clustered system did not detect the new storage, you
might need to run this command before the system can detect the new MDisks.
Use the detectmdisk command to scan for newly added MDisks, as shown in Example 9-7.
To check whether any newly added MDisks were successfully detected, run the lsmdisk
command and look for new unmanaged MDisks.
If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem and that the zones are set up correctly.
Discovery process: If you assigned many logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Check several times by using the lsmdisk command to
see whether all the expected MDisks are present.
When all the disks that are allocated to the SVC are seen from the SVC system, the following
procedure is a useful way to verify the MDisks that are unmanaged and ready to be added to
the storage pool:
1. List the unmanaged MDisk candidates by entering the lsmdiskcandidate command.
Alternatively, you can list all MDisks (managed or unmanaged) by running the lsmdisk
command, as shown in Example 9-9.
4 mdisk4 online managed 1 test_pool_01 128.0GB 0000000000000001
controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB 0000000000000002
controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
From this output, you can see more information, such as the status, about each MDisk.
For our current task, we are interested only in the unmanaged disks because they are
candidates for a storage pool.
Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the detectmdisk command, as shown in Example 9-10.
3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not
visible, check that the LUNs from your subsystem were correctly assigned to the SVC and
that the appropriate zoning is in place (for example, the SVC can see the disk subsystem).
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no
To display the details of an individual MDisk, run the lsmdisk command with the name or ID of
the MDisk from which you want the information, as shown in Example 9-12.
slow_write_priority
fabric_type fc
site_id 1
site_name site1
easy_tier_load high
encrypt no
The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name
can be 1 - 63 characters. However, the new name cannot start with a number, dash, or the
word “MDisk” because this prefix is reserved for SVC assignment only.
By running the lsmdisk command, you can see that mdisk8 is excluded, as shown in
Example 9-14.
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 excluded managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no
After the necessary corrective action is taken to repair the MDisk (replace the failed disk,
repair the SAN zones, and so on), we must include the MDisk again. We issue the
includemdisk command (Example 9-15) because the SVC system does not include the
MDisk automatically.
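Example 9-15 essentially reduces to one command; a minimal sketch, assuming the excluded
MDisk is mdisk8 as in the surrounding output:
IBM_2145:ITSO_SVC2:superuser>includemdisk mdisk8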
Running the lsmdisk command again shows that mdisk8 is online again, as shown in
Example 9-16.
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no
You can add only unmanaged MDisks to a storage pool. This command adds the MDisk
named mdisk6 to the storage pool that is named STGPool_Multi_Tier.
Important: Do not add this MDisk to a storage pool if you want to create an image mode
volume from the MDisk that you are adding. When you add an MDisk to a storage pool, it
becomes managed and extent mapping is not necessarily one-to-one anymore.
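A minimal sketch of the addmdisk command that the preceding paragraph describes (the MDisk
and pool names are taken from that paragraph):
IBM_2145:ITSO_SVC2:superuser>addmdisk -mdisk mdisk6 STGPool_Multi_Tier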
Example 9-18 lsmdisk -filtervalue: MDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC2:superuser>lsmdisk -filtervalue mdisk_grp_name=CompressedV7000
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier encrypt
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003
V7000_Gen2 6005076400820008380000000000000600000000000000000000000000000000
enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004
V7000_Gen2 6005076400820008380000000000000700000000000000000000000000000000
enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005
V7000_Gen2 6005076400820008380000000000000800000000000000000000000000000000
enterprise no
By using a wildcard with this command, you can see all of the MDisks that are present in the
storage pools that are named CompressedV7000* (the asterisk (*) indicates a wildcard).
This section describes the operations that use MDisks and the storage pool. It also explains
the tasks that we can perform at the storage pool level.
Create a storage pool by using the mkmdiskgrp command, as shown in Example 9-19.
This command creates a storage pool that is called CompressedV7000. The extent size that is
used within this group is 256 MiB. We did not add any MDisks to the storage pool, so it is an
empty storage pool.
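Example 9-19 is summarized by the sentences above; a minimal sketch of the command that they
describe (the pool name and extent size come from the text):
IBM_2145:ITSO_SVC2:superuser>mkmdiskgrp -name CompressedV7000 -ext 256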
You can add unmanaged MDisks and create the storage pool in the same command. Use the
mkmdiskgrp command with the -mdisk parameter and enter the IDs or names of the MDisks to
add the MDisks immediately after the storage pool is created.
Before the creation of the storage pool, enter the lsmdisk command, as shown in
Example 9-20. This command lists all of the available MDisks that are seen by the SVC
system.
1 mdisk1 online unmanaged 100.0GB
0000000000000001 V7000_Gen2
6005076400820008380000000000000100000000000000000000000000000000 enterprise no
2 mdisk2 online managed 3 DS3400_pool1 100.0GB
0000000000000002 V7000_Gen2
6005076400820008380000000000000200000000000000000000000000000000 enterprise no
3 mdisk3 online managed 1 test_pool_01 128.0GB
0000000000000000 controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB
0000000000000001 controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB
0000000000000002 controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
6 mdisk6 online managed 0 CompressedV7000 30.0GB
0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB
0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB
0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB
0000000000000001 DS_3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB
0000000000000004 DS_3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB
0000000000000000 DS_3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB
0000000000000003 DS_3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB
0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB
0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB
0000000000000002 DS_3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB
0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB
0000000000000005 DS_3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no
IBM_2145:ITSO_SVC2:superuser>
By using the same command (mkmdiskgrp) and knowing the MDisk IDs that we are using, we
can add multiple MDisks to the storage pool at the same time. We now add the unmanaged
MDisks to the storage pool that we created, as shown in Example 9-21 on page 510.
Example 9-21 Creating a storage pool and adding available MDisks
IBM_2145:ITSO_SVC2:superuser>mkmdiskgrp -name ITSO_Pool1 -ext 256 -mdisk 0:1
MDisk Group, id [2], successfully created
This command creates a storage pool that is called ITSO_Pool1. The extent size that is used
within this group is 256 MiB, and two MDisks (IDs 0 and 1) are added to the storage pool.
Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a
-name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the
SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created.
If you want to provide a name, you can use letters A - Z, a - z, numbers 0 - 9, and the
underscore (_). The name can be 1 - 63 characters, but it cannot start with a number or the
word “MDiskgrp” because this prefix is reserved for SVC assignment only.
By running the lsmdisk command, you now see the MDisks as managed and as part of the
CompressedV7000, as shown in Example 9-22.
In SVC 7.4, you can also create a child pool, which is a storage pool that is inside a parent
pool (Example 9-24).
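Example 9-24 is not reproduced here; as a hedged sketch (the child pool name and size are
assumptions, and the -parentmdiskgrp, -size, and -unit parameters should be verified against
the 7.4 command help), a child pool with a defined capacity might be created inside an
existing parent pool as follows:
IBM_2145:ITSO_SVC2:superuser>mkmdiskgrp -name Child_Pool1 -parentmdiskgrp CompressedV7000 -size 20 -unit gb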
atus:compression_active:compression_virtual_capacity:compression_compressed_capaci
ty:compression_uncompressed_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:chi
ld_mdisk_grp_count:child_mdisk_grp_capacity:type:encrypt
0:CompressedV7000:online:3:0:90.00GB:1024:90.00GB:0.00MB:0.00MB:0.00MB:0:80:auto:b
alanced:no:0.00MB:0.00MB:0.00MB:0:CompressedV7000:0:0.00MB:parent:no
1:test_pool_01:online:3:14:381.00GB:1024:366.00GB:14.00GB:11.00GB:11.07GB:3:80:off
:inactive:no:0.00MB:0.00MB:0.00MB:1:test_pool_01:0:0.00MB:parent:no
2:MigrationPool_8192:online:0:0:0:8192:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no
:0.00MB:0.00MB:0.00MB:2:MigrationPool_8192:0:0.00MB:parent:
3:DS3400_pool1:online:1:8:100.00GB:1024:42.00GB:62.00GB:57.00GB:57.02GB:62:80:auto
:balanced:no:0.00MB:0.00MB:0.00MB:3:DS3400_pool1:0:0.00MB:parent:no
5:Migration_Out:online:0:0:0:1024:0:0.00MB:0.00MB:0.00MB:0:80:auto:balanced:no:0.0
0MB:0.00MB:0.00MB:5:Migration_Out:0:0.00MB:parent:
6:MigrationPool_1024:online:0:0:0:1024:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no
:0.00MB:0.00MB:0.00MB:6:MigrationPool_1024:0:0.00MB:parent:
Changing the storage pool: The chmdiskgrp command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “mdiskgrp” because this prefix is reserved for SVC assignment only.
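A minimal rename sketch (the pool ID 2 is a placeholder for illustration; the new name is
taken from the next paragraph):
IBM_2145:ITSO_SVC2:superuser>chmdiskgrp -name STGPool_DS3500-2_new 2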
This command removes storage pool STGPool_DS3500-2_new from the SVC system
configuration.
Removing a storage pool from the SVC system configuration: If there are MDisks
within the storage pool, you must use the -force flag to remove the storage pool from the
SVC system configuration, as shown in the following example:
rmmdiskgrp STGPool_DS3500-2_new -force
Confirm that you want to use this flag because it destroys all mapping information and data
that is held on the volumes. The mapping information and data cannot be recovered.
This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag
is set because volumes are using this storage pool.
Sufficient space: The removal occurs only if there is sufficient space to migrate the volume’s data
to other extents on other MDisks that remain in the storage pool. After you remove the
MDisk from the storage pool, changing the mode from managed to unmanaged takes time,
depending on the size of the MDisk that you are removing.
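A minimal sketch of the rmmdisk command that is described before the note (the MDisk ID, pool
ID, and -force flag come from that description):
IBM_2145:ITSO_SVC2:superuser>rmmdisk -mdisk 8 -force 2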
Host is powered on, connected, and zoned to the SAN Volume Controller
When you create your host on the SVC, it is a preferred practice to check whether the host
bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By
checking, you ensure that zoning is done and that the correct WWPN is used. Run the
lshbaportcandidate command, as shown in Example 9-28.
After you know the WWPNs that are displayed, match your host (use host or SAN switch
utilities to verify) and use the mkhost command to create your host.
Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).
You can use the letters A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore
(_). The name can be 1 - 63 characters. However, the name cannot start with a number,
dash, or the word “host” because this prefix is reserved for SVC assignment only.
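A sketch of the create command that is described next (the WWPNs are those given in the text):
IBM_2145:ITSO_SVC2:superuser>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA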
This command creates a host that is called Almaden that uses WWPN
21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.
Ports: You can define 1 - 8 ports per host, or you can use the addport command, which is
shown in 9.4.5, “Adding ports to a defined host” on page 516.
In this case, you can enter the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 9-30.
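A sketch of that forced creation, assuming the same parameters plus the -force flag:
IBM_2145:ITSO_SVC2:superuser>mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force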
This command forces the creation of a host that is called Almaden that uses WWPN
210000E08B89C1CD:210000E08B054CAA.
The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.
When we create a host that uses iSCSI as a communication method, iSCSI initiator software
must be installed on the host to initiate the communication between the SVC and the host.
This installation creates an iSCSI qualified name (IQN) identifier that is needed before we
create our host.
Before we start, we check our server’s IQN address (we are running Windows Server 2008).
We select Start → Programs → Administrative tools, and we select iSCSI initiator. The
IQN in our example is iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com, as
shown in Figure 9-1.
We create the host by issuing the mkhost command, as shown in Example 9-31. When the
command completes successfully, we display our created host.
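A sketch of the create step (the host name Baldur matches the later mapping step; the IQN is the one noted previously):
IBM_2145:ITSO_SVC2:superuser>mkhost -name Baldur -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com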
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline
Important: When the host is initially configured, the default authentication method is set to
no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is
set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the
chhost command with the chapsecret parameter.
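A sketch of setting a CHAP secret for this host, assuming the -chapsecret parameter (the secret is illustrative):
IBM_2145:ITSO_SVC2:superuser>chhost -chapsecret passw0rd Baldur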
The host definition is created. We map a volume to our new iSCSI server, as shown in
Example 9-32. We created the volume, as described in 9.6.1, “Creating a volume” on
page 520. In our scenario, our volume’s ID is 21 and the host name is Baldur. We map it to
our iSCSI host.
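A sketch of that mapping, with the volume ID and host name from this scenario:
IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Baldur 21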
After the volume is mapped to the host, we display the host information again, as shown in
Example 9-33.
Tip: FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.
If you must display a CHAP secret for a defined server, use the lsiscsiauth command. The
lsiscsiauth command lists the CHAP secret that is configured for authenticating an entity to
the SVC system.
9.4.3 Modifying a host
Use the chhost command to change the name of a host. To verify the change, run the lshost
command. Example 9-34 shows both of these commands.
IBM_2145:ITSO_SVC2:superuser>lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4
Host name: The chhost command specifies the new name first. You can use letters A - Z
and a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name can be 1 - 63
characters. However, it cannot start with a number, dash, or the word “host” because this
prefix is reserved for SVC assignment only.
Hosts that require the -type parameter: If you use Hewlett-Packard UNIX (HP-UX), you
use the -type option. For more information about the hosts that require the -type
parameter, see IBM System Storage Open Software Family SAN Volume Controller: Host
Attachment Guide, SC26-7563.
The command that is shown in Example 9-35 deletes the host that is called Angola from the
SVC configuration.
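A sketch of that deletion:
IBM_2145:ITSO_SVC2:superuser>rmhost Angola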
Deleting a host: If any volumes are assigned to the host, you must use the -force flag, for
example, rmhost -force Angola.
If your host is connected through SAN with FC and if the WWPN is zoned to the SVC system,
issue the lshbaportcandidate command to compare with the information that you have from
the server administrator, as shown in Example 9-36.
Use host or SAN switch utilities to verify whether the WWPN matches your information. If the
WWPN matches your information, use the addhostport command to add the port to the host,
as shown in Example 9-37.
Adding multiple ports: You can add multiple ports at one time by using a colon (:) as the
separator between WWPNs, as shown in the following example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the lshbaportcandidate command does not
display your WWPN. In this case, you can manually enter the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 9-38.
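A sketch of the forced addition that is described next:
IBM_2145:ITSO_SVC2:superuser>addhostport -hbawwpn 210000E08B054CAA -force Palau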
This command forces the addition of the WWPN that is named 210000E08B054CAA to the host
called Palau.
If you run the lshost command again, you can see your host with an updated port count of 2,
as shown in Example 9-39.
If your host uses iSCSI as a connection method, you must have the new iSCSI IQN ID before
you add the port. Unlike FC-attached hosts, you cannot check for available candidates with
iSCSI.
After you acquire the other iSCSI IQN, use the addhostport command, as shown in
Example 9-40.
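A sketch of adding an iSCSI port, assuming the -iscsiname parameter (the IQN is illustrative):
IBM_2145:ITSO_SVC2:superuser>addhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur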
Before you remove the WWPN, ensure that it is the correct WWPN by issuing the lshost
command, as shown in Example 9-41.
When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host
port, as shown in Example 9-42.
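Sketches of the removals that are described next (the iSCSI form assumes the -iscsiname parameter):
IBM_2145:ITSO_SVC2:superuser>rmhostport -hbawwpn 210000E08B89C1CD Palau
IBM_2145:ITSO_SVC2:superuser>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur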
This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI
IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.
Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as
the separator between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
9.5 Working with the Ethernet port for iSCSI
In this section, we describe the commands that are used for setting, changing, and displaying
the SVC Ethernet port for iSCSI configuration.
Example 9-43 shows the lsportip command that lists the iSCSI IP addresses that are
assigned for each port on each node in the system.
Example 9-44 shows how the cfgportip command assigns an IP address to each node
Ethernet port for iSCSI I/O.
When a volume is created, you must enter several parameters at the CLI. Mandatory and
optional parameters are available.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
When you are ready to create a volume, you must know the following information before you
start to create the volume:
In which storage pool the volume has its extents
From which I/O Group the volume is accessed
Which SVC node is the preferred node for the volume
Size of the volume
Name of the volume
Type of the volume
Whether this volume is managed by Easy Tier to optimize its performance
When you are ready to create your striped volume, use the mkvdisk command. In
Example 9-45, this command creates a 10 GB striped volume with volume ID 20 within the
storage pool STGPool_DS3500-2 and assigns it to the io_grp0 I/O Group. Its preferred node is
node 1.
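A sketch of such a command (the volume name is illustrative; the pool, I/O Group, preferred node, and size are as described):
IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -vtype striped -name Volume_striped_01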
To verify the results, use the lsvdisk command, as shown in Example 9-46.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
9.6.2 Volume information
Use the lsvdisk command to display summary information about all volumes that are
defined within the SVC environment. To display more detailed information about a specific
volume, run the command again and append the volume name parameter or the volume ID.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
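A sketch of a thin-provisioned create of the kind described next (the -rsize value and the name are illustrative; -autoexpand and -grainsize 32 match the description):
IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 32 -name Volume_SE_01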
This command creates a space-efficient 10 GB volume. The volume belongs to the storage
pool that is named STGPool_DS3500-2 and is owned by the io_grp0 I/O Group. The real
capacity automatically expands until the volume size of 10 GB is reached. The grain size is
set to 32 K, which is the default.
Disk size: When the -rsize parameter is used, you have the following options: disk_size,
disk_size_percentage, and auto.
Specify the units for a disk_size integer by using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.
The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.
9.6.4 Creating a volume in image mode
This virtualization type allows an image mode volume to be created when an MDisk has data
on it, perhaps from a pre-virtualized subsystem. When an image mode volume is created, it
directly corresponds to the previously unmanaged MDisk from which it was created.
Therefore, except for a thin-provisioned image mode volume, the volume’s logical block
address (LBA) x equals MDisk LBA x.
You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.
When the first MDisk extent is migrated, the volume is no longer an image mode volume. You
can add an image mode volume to an already populated storage pool with other types of
volumes, such as striped or sequential volumes.
Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That
is, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MiB.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.
Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks
and the remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the mkvdisk command to create an image mode volume, as shown in Example 9-49.
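A sketch of an image mode create, with the object names that are used in the description that follows:
IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp io_grp0 -vtype image -mdisk mdisk10 -name Image_Volume_A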
This command creates an image mode volume that is called Image_Volume_A that uses the
mdisk10 MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and the volume is
owned by the io_grp0 I/O Group.
If we run the lsvdisk command again, the volume that is named Image_Volume_A has a
status of image, as shown in Example 9-50.
9.6.5 Adding a mirrored volume copy
You can create a mirrored copy of a volume, which keeps a volume accessible even when the
MDisk on which it depends becomes unavailable. You can create a copy of a volume on
separate storage pools or by creating an image mode copy of the volume. Copies increase
the availability of data; however, they are not separate objects. You can create or change
mirrored copies from the volume only.
In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools.
For example, if you have a non-mirrored volume in one storage pool and want to migrate that
volume to another storage pool, you can add a copy of the volume and specify the second
storage pool. After the copies are synchronized, you can delete the copy on the first storage
pool. The volume is copied to the second storage pool while remaining online during the copy.
To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.
In the following scenario, we show creating a mirrored volume from one storage pool to
another storage pool.
As you can see in Example 9-51, the volume has a copy with copy_id 0.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
In Example 9-52, we add the volume copy mirror by using the addvdiskcopy command.
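A sketch of that command, with the target pool and volume name that appear in this scenario:
IBM_2145:ITSO_SVC2:superuser>addvdiskcopy -mdiskgrp STGPool_DS5000-1 Volume_no_mirror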
During the synchronization process, you can see the status by using the
lsvdisksyncprogress command. As shown in Example 9-53, the first time that the status is
checked, the synchronization progress is at 48%, and the estimated completion time is
11:09:26. The second time that the command is run, the progress status is at 100%, and the
synchronization is complete.
As you can see in Example 9-54, the new mirrored volume copy (copy_id 1) was added and
can be seen by using the lsvdisk command.
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
While you are adding a volume copy mirror, you can define the new copy with different
parameters than the existing copy. Therefore, you can define a thin-provisioned volume copy
for a non-thin-provisioned volume and vice versa, which is one way to migrate a
non-thin-provisioned volume to a thin-provisioned volume.
Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.
Now, we can change the name of the volume that was mirrored from Volume_no_mirror to
Volume_mirrored, as shown in Example 9-55.
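A sketch of that rename:
IBM_2145:ITSO_SVC2:superuser>chvdisk -name Volume_mirrored Volume_no_mirror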
Example 9-56 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a volume that is named Volume_new from the volume that is named
Volume_mirrored.
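A sketch of that split, assuming that copy 1 is the copy to split off:
IBM_2145:ITSO_SVC2:superuser>splitvdiskcopy -copy 1 -name Volume_new Volume_mirrored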
As you can see in Example 9-57 on page 529, the new volume that is named Volume_new was
created as an independent volume.
Example 9-57 lsvdisk command
IBM_2145:ITSO_SVC2:superuser>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
After the command that is shown in Example 9-56 on page 528 is issued, Volume_mirrored no
longer has its mirrored copy, and a new volume is created automatically.
You can specify a new name or label. The new name can be used to reference the volume.
The I/O Group with which this volume is associated can be changed. Changing the I/O Group
with which this volume is associated requires a flush of the cache within the nodes in the
current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host
level before you perform this operation.
Tips: If the volume has a mapping to any hosts, it is impossible to move the volume to an
I/O Group that does not include any of those hosts.
This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O Group.
If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.
If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
Base the choice between I/O and MB as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large amounts of I/O, but they
transfer only a relatively small amount of data. In this case, setting an I/O governing throttle
that is based on MB per second does not achieve much. It is better to use an
I/Os-per-second throttle.
At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle that is based on I/Os per second does not achieve much, so it is better to
use an MB per second throttle.
I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can
be achieved. It means that no throttle is set.
Example 9-58 chvdisk command
IBM_2145:ITSO_SVC2:superuser>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC2:superuser>chvdisk -warning 85% volume_7
New name first: The chvdisk command specifies the new name first. The name can
consist of letters A - Z and a - z, numbers 0 - 9, the dash (-), and the underscore (_). It can
be 1 - 63 characters. However, it cannot start with a number, dash, or the word “vdisk”
because this prefix is reserved for SVC assignment only.
The first command changes the volume throttling of volume_7 to 20 MBps. The second
command changes the thin-provisioned volume warning to 85%. To verify the changes, issue
the lsvdisk command, as shown in Example 9-59.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
If any remote copy, FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.
If the volume is the subject of a “migrate to image mode” process, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.
If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit is consistent with the data that a host might previously read
from the image mode volume. That is, all fast write data was flushed to the underlying LUN. If
the -force flag is used, consistency is not guaranteed.
If any non-destaged data exists in the fast write cache for this volume, the deletion of the
volume fails unless the -force flag is specified. Now, any non-destaged data in the fast write
cache is deleted.
Use the rmvdisk command to delete a volume from your SVC configuration, as shown in
Example 9-60.
This command deletes the volume_A volume from the SVC configuration. If the volume is
assigned to a host, you must use the -force flag to delete the volume, as shown in
Example 9-61.
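Sketches of the plain and forced deletions that are described here:
IBM_2145:ITSO_SVC2:superuser>rmvdisk volume_A
IBM_2145:ITSO_SVC2:superuser>rmvdisk -force volume_A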
Assuming that your operating systems support expansion, you can use the expandvdisksize
command to increase the capacity of a volume, as shown in Example 9-62.
This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it a
total size of 40 GB.
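A sketch of that expansion:
IBM_2145:ITSO_SVC2:superuser>expandvdisksize -size 5 -unit gb volume_C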
To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 9-63. This command changes the real size of the volume_B volume to a real capacity
of 55 GB. The capacity of the volume is unchanged.
Important: If a volume is expanded, its type becomes striped even if it was previously
sequential or in image mode. If enough extents are not available to expand your volume to
the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
9.6.11 Assigning a volume to a host
Use the mkvdiskhostmap command to map a volume to a host. When run, this command
creates a mapping between the volume and the specified host, which presents this volume to
the host as though the disk was directly attached to the host. It is only after this command is
run that the host can perform I/O to the volume. Optionally, a SCSI LUN ID can be assigned to
the mapping.
When the HBA on the host scans for devices that are attached to it, the HBA discovers all of
the volumes that are mapped to its FC ports. When the devices are found, each one is
allocated an identifier (SCSI LUN ID).
For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID, as required. If you do not
specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID,
based on any mappings that exist with that host.
By using the volume and host definition that we created in the previous sections, we assign
volumes to hosts that are ready for their use. We use the mkvdiskhostmap command, as
shown in Example 9-64.
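Sketches of those mappings:
IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Almaden volume_B
IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Almaden volume_C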
This command displays volume_B and volume_C that are assigned to host Almaden, as shown
in Example 9-65.
Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.
Certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, as shown in the
following examples:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.
It is not possible to map a volume to a host more than one time at separate LUNs
(Example 9-66).
This command maps the volume that is called volume_A to the host that is called Siam.
All tasks that are required to assign a volume to an attached host are complete.
From this command, you can see that the host Siam has only one assigned volume that is
called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is
presented to the host. If no host is specified, all defined host-to-volume mappings are
returned.
Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
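A sketch of the unmapping command that is described next:
IBM_2145:ITSO_SVC2:superuser>rmvdiskhostmap -host Tiger volume_D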
This command unmaps the volume that is called volume_D from the host that is called Tiger.
9.6.14 Migrating a volume
You might want to migrate volumes from one set of MDisks to another set of MDisks to
decommission an old disk subsystem, to better balance performance across your
virtualized environment, or to migrate data into the SVC environment transparently by using
image mode. For more information about migration, see Chapter 6, “Data migration” on
page 241.
As you can see from the parameters that are shown in Example 9-69, before you can migrate
your volume, you must know the name of the volume that you want to migrate and the name
of the storage pool to which you want to migrate it. To discover the names, run the lsvdisk
and lsmdiskgrp commands.
After you know these details, you can run the migratevdisk command, as shown in
Example 9-69.
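A sketch of such a migration (the volume and pool names are illustrative; see the Tips note that follows for the -threads parameter):
IBM_2145:ITSO_SVC2:superuser>migratevdisk -vdisk Volume_mirrored -mdiskgrp STGPool_DS5000-1 -threads 4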
Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.
By using the optional threads parameter, you can assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 9-70.
IBM_2145:ITSO_SVC2:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
Progress: The progress is shown as percent complete. If you receive no more replies, it
means that the process finished.
To migrate a fully managed volume to an image mode volume, the following rules apply:
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state.
Regardless of the mode in which the volume starts, it is reported as being in managed mode
during the migration.
Both of the MDisks that are involved are reported as being in image mode during
the migration.
If the migration is interrupted by a system recovery or cache problem, the migration
resumes after the recovery completes.
In this example, you migrate the data from volume_A onto mdisk10, and the MDisk must be put
into the STGPool_IMAGE storage pool.
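A sketch of that migration to image mode:
IBM_2145:ITSO_SVC2:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE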
You can use this command to shrink the physical capacity that is allocated to a particular
volume by the specified amount. You also can use this command to shrink the virtual capacity
of a thin-provisioned volume without altering the physical capacity that is assigned to the
volume. Use the following parameters:
For a non-thin-provisioned volume, use the -size parameter.
For a thin-provisioned volume’s real capacity, use the -rsize parameter.
For the thin-provisioned volume’s virtual capacity, use the -size parameter.
When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed; therefore, you cannot assume that it is unused space that
is removed.
Image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully
managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of
the volume must be synchronized.
Important: Consider the following guidelines when you are shrinking a disk:
If the volume contains data, do not shrink the disk.
Certain operating systems or file systems use the outer edge of the disk for
performance reasons. This command can shrink a FlashCopy target volume to the
same capacity as the source.
Before you shrink a volume, validate that the volume is not mapped to any host objects.
If the volume is mapped, data is displayed. You can determine the exact capacity of the
source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the volume by the required amount by issuing the shrinkvdisksize -size
disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.
Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a volume, as shown in Example 9-72.
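A sketch of the shrink operation that is described next:
IBM_2145:ITSO_SVC2:superuser>shrinkvdisksize -size 44 -unit gb volume_D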
This command shrinks a volume that is called volume_D from a total size of 80 GB by 44 GB,
to a new total size of 36 GB.
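The command that is being described next is presumably lsmdiskmember; a sketch:
IBM_2145:ITSO_SVC2:superuser>lsmdiskmember mdisk8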
This command displays a list of all of the volume IDs that correspond to the volume copies
that use mdisk8.
To correlate the IDs that are displayed in this output to volume names, we can run the
lsvdisk command. For more information, see 9.6, “Working with volumes” on page 520.
Example 9-74 lsvdisk -filtervalue: VDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1000000000000011,0,1,empty,0,0,no
If you want to know more about these MDisks, you can run the lsmdisk command, as
described in 9.2, “New commands and functions” on page 498 (by using the ID that is
displayed in Example 9-75 rather than the name).
9.6.20 Showing from which storage pool a volume has its extents
Use the lsvdisk command to show to which storage pool a specific volume belongs, as
shown in Example 9-76.
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
To learn more about these storage pools, you can run the lsmdiskgrp command, as
described in 9.3.10, “Working with a storage pool” on page 508.
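The command that is being described next is presumably lsvdiskhostmap; a sketch with the volume from the text:
IBM_2145:ITSO_SVC2:superuser>lsvdiskhostmap -delim : volume_B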
This command shows the host or hosts to which the volume_B volume was mapped. Duplicate
entries are normal because multiple paths exist between the clustered system and the host. To
ensure that the operating system on the host sees the disk only one time, you must install
and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.
This command shows which volumes are mapped to the host called Almaden.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the host name in this case.
Otherwise, the command does not return any data.
Instead, you must enter the command that is shown in Example 9-79 from your multipath
command prompt.
1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
State: In Example 9-79, the state of each path is OPEN. Sometimes, the state is
CLOSED. This state does not necessarily indicate a problem because it might be a
result of the path’s processing stage.
2. Run the lshostvdiskmap command to return a list of all assigned volumes, as shown in
Example 9-80.
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Almaden.
3. Run the lsvdiskmember vdiskname command for the MDisk or a list of the MDisks that
make up the specified volume, as shown in Example 9-81.
4. Query the MDisks with the lsmdisk mdiskID command to discover their controller and
LUN information, as shown in Example 9-82. The output displays the controller name and
the controller LUN ID to help you to track back to a LUN within the disk subsystem (if you
gave your controller a unique name, such as a serial number). See Example 9-82.
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
9.7 Scripting under the CLI for SAN Volume Controller task
automation
Command prefix changes: The svctask and svcinfo command prefixes are no longer
necessary when a command is run. If you have existing scripts that use those prefixes,
they continue to function. You do not need to change the scripts.
Scripting constructs work well for the automation of regular operational jobs.
You can use available shells to develop scripts. Scripting enhances the productivity of SVC
administrators and the integration of their storage virtualization environment. You can create
your own customized scripts to automate many tasks for completion at various times and run
them through the CLI.
We suggest that you keep the scripting as simple as possible in large SAN environments
where scripting commands are used. It is harder to manage fallback, documentation, and the
verification of a successful script before execution in a large SAN environment.
In this section, we present an overview of how to automate various tasks by creating scripts
by using the SVC CLI.
(Figure: script flow. Create a connection (SSH) to the SVC, run the commands with scheduled or manual activation, and perform logging.)
Secure Shell Key: The use of a Secure Shell (SSH) key is optional. (You can use a user
ID and password to access the system.) However, we suggest the use of an SSH key for
security reasons. We provide a sample of its use in this section.
When you create a connection to the SVC from a script, you must have access to a private key
that corresponds to a public key that was previously uploaded to the SVC.
The key is used to establish the SSH connection that is needed to use the CLI on the SVC. If
the SSH key pair is generated without a passphrase, you can connect without the need for
special scripting to pass in the passphrase.
On UNIX systems, you can use the ssh command to create an SSH connection with the SVC.
On Windows systems, you can use a utility that is called plink.exe (which is provided with the
PuTTY tool) to create an SSH connection with the SVC. In the following examples, we use
plink to create the SSH connection to the SVC.
When you use the CLI, not all commands provide a response to determine the status of the
started command. Therefore, always create checks that can be logged for monitoring and
troubleshooting purposes.
Connecting to the SAN Volume Controller by using a predefined SSH
connection
The easiest way to create an SSH connection to the SVC is when plink can call a predefined
PuTTY session.
The private key for authentication (for example, icat.ppk). This key is the private key that
you created. Set this parameter by clicking Connection → SSH → Auth, as shown in
Figure 9-4 on page 546.
Figure 9-4 An ssh private key configuration
The IP address of the SVC clustered system. Set this parameter by clicking Session, as
shown in Figure 9-5.
– If a predefined PuTTY session is not used, use the following syntax:
plink superuser@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"
IBM provides a suite of scripting tools that are based on Perl. You can download these
scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools
Important command prefix changes: The svctask and svcinfo command prefixes are
no longer necessary when you are running a command. If you have existing scripts that
use those prefixes, they continue to function. You do not need to change the scripts.
When the command syntax is shown, you see several parameters in square brackets, for
example, [parameter]. The square brackets indicate that the parameter is optional in most if
not all instances. Any parameter that is not in square brackets is required information. You
can view the syntax of a command by entering one of the following commands:
svcinfo -? shows a complete list of information commands.
svctask -? shows a complete list of task commands.
svcinfo commandname -? shows the syntax of information commands.
svctask commandname -? shows the syntax of task commands.
svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the information commands.
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.
If you review the syntax of a command by entering svcinfo commandname -?, you often see
-filter listed as a parameter. The correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were issued recently. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.
9.8.2 Organizing on window content
There are instances in which the output of a command can be long and difficult to read in the
window. If you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.
Filtering
To reduce the output that is displayed by a command, you can specify a number of filters,
depending on the command that you are running. To see which filters are available, enter the
command followed by the -filtervalue? flag, as shown in Example 9-83.
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_flash
When you know the filters, you can be more selective in generating output. Consider the
following points:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when names are used.
When capacity is used, the units must also be specified by using -u b | kb | mb | gb | tb |
pb.
For example, if we run the lsvdisk command with no filters but with the -delim parameter, we
see the output that is shown in Example 9-84 on page 549.
Example 9-84 lsvdisk command: No filters
IBM_2145:ITSO_SVC2:superuser>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_change
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F1000000000000014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F100000000000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1000000000000009,0,1,empty,0,no
Tip: The -delim parameter condenses the output and separates the data fields with the
specified delimiter character (a comma in this example) as opposed to wrapping text over
multiple lines. This parameter is often used if you must get reports during script execution.
If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output,
as shown in Example 9-85.
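A sketch of such a filtered query (the pool name is illustrative):
IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-1 -delim ,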
9.9.1 Viewing clustered system properties
Important changes: The following changes were made since SVC 6.3:
The svcinfo lscluster command was changed to lssystem.
The svctask chcluster command was changed to chsystem, and several optional
parameters were moved to new commands. For example, to change the IP address of
the system, you can now use the chsystemip command. All of the old commands are
maintained for compatibility.
Use the lssystem command to display summary information about the clustered system, as
shown in Example 9-86.
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 50
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 571.00GB
tier_free_capacity 493.00GB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization IBM
email_machine_address Street
email_machine_city City
email_machine_state CA
email_machine_zip 99999
email_machine_country CA
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 11111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method none
vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller
Use the lssystemstats command to display the most recent values of all node statistics
across all nodes in a clustered system, as shown in Example 9-87.
drive_io 0 0 110927162859
drive_ms 0 0 110927162859
vdisk_r_mb 0 0 110927162859
vdisk_r_io 0 0 110927162859
vdisk_r_ms 0 0 110927162859
vdisk_w_mb 0 0 110927162859
vdisk_w_io 0 0 110927162859
vdisk_w_ms 0 0 110927162859
mdisk_r_mb 0 0 110927162859
mdisk_r_io 0 0 110927162859
mdisk_r_ms 0 0 110927162859
mdisk_w_mb 0 0 110927162859
mdisk_w_io 0 0 110927162859
mdisk_w_ms 0 0 110927162859
drive_r_mb 0 0 110927162859
drive_r_io 0 0 110927162859
drive_r_ms 0 0 110927162859
drive_w_mb 0 0 110927162859
drive_w_io 0 0 110927162859
drive_w_ms 0 0 110927162859
All command parameters are optional; however, you must specify at least one parameter.
Important: Changing the speed on a running system breaks I/O service to the attached
hosts. Before the fabric speed is changed, stop the I/O from the active hosts and force
these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). You might need to reboot specific hosts to
detect the new fabric speed.
Example 9-88 shows configuring the Network Time Protocol (NTP) IP address.
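A sketch of that configuration, assuming the -ntpip parameter of chsystem (the address is illustrative):
IBM_2145:ITSO_SVC2:superuser>chsystem -ntpip 9.64.210.10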
For more information about how iSCSI works, see Chapter 2, “IBM SAN Volume Controller”
on page 9. In this section, we show how we configured our system for use with iSCSI.
We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to
contain the clustered system IP. When we configured our nodes to be used with iSCSI, we did
not affect our clustered system IP. The clustered system IP is changed, as described in 9.9.2,
“Changing system settings” on page 552.
Important: You can have more than a one-to-one relationship between IP addresses and
physical connections. A four-to-one (4:1) relationship is possible, which consists of two IPv4
addresses plus two IPv6 addresses (four total) on one physical connection per port per
node.
Tip: When you are reconfiguring IP ports, be aware that configured iSCSI connections
must reconnect if changes are made to the IP addresses of the nodes.
Example 9-89 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC2:superuser>chsystem -iscsiauthmethod chap -chapsecret passw0rd
In our scenario, our clustered system IP address is 9.64.210.64, which is not affected during
our configuration of the node’s IP addresses.
We start by listing our ports by using the lsportip command (not shown). We see that we
have two ports per node with which to work. Both ports can have two IP addresses that can
be used for iSCSI.
We configure the secondary port in both nodes in our I/O Group, as shown in Example 9-90.
Example 9-90 Configuring the secondary Ethernet port on both SVC nodes
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2
While both nodes are online, each node is available to iSCSI hosts on the IP address that we
configured. iSCSI failover between nodes is enabled automatically. Therefore, if a node goes
offline for any reason, its partner node in the I/O Group becomes available on the failed
node’s port IP address. This design ensures that hosts can continue to perform I/O. The
lsportip command displays the port IP addresses that are active on each node.
Now, two active system ports are on the configuration node. If the system IP address is
changed, the open command-line shell closes during the processing of the command. You
must reconnect to the new IP address if connected through that port.
If the clustered system IP address is changed, the open command-line shell closes during the
processing of the command and you must reconnect to the new IP address. If this node
cannot rejoin the clustered system, you can start the node in service mode. In this mode, the
node can be accessed as a stand-alone node by using the service IP address.
For more information about the service IP address, see 9.20, “Working with the Service
Assistant menu” on page 651.
List the IP addresses of the clustered system by issuing the lssystemip command, as shown
in Example 9-91.
Modify the IP address by running the chsystemip command. You can specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 9-92.
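A sketch of a static assignment (the addresses are illustrative; port 1 carries the system IP):
IBM_2145:ITSO_SVC2:superuser>chsystemip -clusterip 9.64.210.64 -gw 9.64.210.1 -mask 255.255.255.0 -port 1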
Important: If you specify a new system IP address, the existing communication with the
system through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
still works.
List the IP service addresses of the clustered system by running the lsserviceip command.
The required tasks to change the IP addresses of the clustered system are complete.
9.9.6 Setting the clustered system time zone and time
Use the -timezone parameter to specify the numeric ID of the time zone that you want to set.
Run the lstimezones command to list the time zones that are available on the system. This
command displays a list of valid time zone settings.
Tip: If you changed the time zone, you must clear the event log dump directory before you
can view the event log through the web application.
2. To find the time zone code that is associated with your time zone, enter the lstimezones
command, as shown in Example 9-94. A truncated list is provided for this example. If this
setting is correct (for example, 522 UTC), go to Step 4. If the setting is incorrect, continue
with Step 3.
3. Set the time zone by running the settimezone command, as shown in Example 9-95.
4. Set the system time by running the setclustertime command, as shown in Example 9-96.
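Sketches of those two steps (the time-format string is an assumption, typically MMDDHHmmYYYY):
IBM_2145:ITSO_SVC2:superuser>settimezone -timezone 522
IBM_2145:ITSO_SVC2:superuser>setclustertime -time 040916352015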
The clustered system time zone and time are now set.
Use the startstats command to start the collection of statistics, as shown in Example 9-97.
Specify the interval (1 - 60) in minutes. This command starts statistics collection and gathers
data at 15-minute intervals.
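A sketch of that command with a 15-minute interval:
IBM_2145:ITSO_SVC2:superuser>startstats -interval 15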
Statistics collection: To verify that the statistics collection is set, display the system
properties again, as shown in Example 9-98.
SVC 6.3: Starting with SVC 6.3, the command svctask stopstats was removed. You
cannot disable the statistics collection.
9.9.9 Shutting down a clustered system
If all input power to an SVC system is to be removed for more than a few minutes (for
example, if the machine room power is to be shut down for maintenance), it is important to
shut down the clustered system before you remove the power. If the input power is removed
from the uninterruptible power supply units without first shutting down the system and the
uninterruptible power supply units, the uninterruptible power supply units remain operational
and eventually are drained of power.
When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the volumes until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in a subsequent unexpected power loss. Recharging the uninterruptible
power supply can take up to two hours.
Shutting down the clustered system before input power is removed to the uninterruptible
power supply units prevents the battery power from being drained. It also makes it possible for
I/O activity to be resumed when input power is restored.
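The shutdown command that is being described is presumably stopsystem (formerly svctask stopcluster); a sketch:
IBM_2145:ITSO_SVC2:superuser>stopsystem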
This command shuts down the SVC clustered system. All data is flushed to disk before the
power is removed. You lose administrative contact with your system and the PuTTY
application automatically closes.
2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you stopped all FlashCopy mappings, Metro Mirror (remote copy)
relationships, data migration operations, and forced deletions before you continue. Enter y
in response to this message to run the command. Entering anything other than y or Y
results in the command not running. In either case, no feedback is displayed.
Important: Before a clustered system is shut down, ensure that all I/O operations are
stopped that are destined for this system because you lose all access to all volumes
that are provided by this system. Failure to do so can result in failed I/O operations
being reported to the host operating systems.
Begin the process of quiescing all I/O to the system by stopping the applications on the
hosts that are using the volumes that are provided by the clustered system.
We completed the tasks that are required to shut down the system. To shut down the
uninterruptible power supply units, press the power-on button on the front panel of each
uninterruptible power supply unit.
Restarting the system: To restart the clustered system, you must first restart the
uninterruptible power supply units by pressing the power button on their front panels. Then,
press the power-on button on the service panel of one of the nodes within the system. After
the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.
As soon as all of the nodes are fully booted, you can reestablish administrative contact by
using PuTTY, and your system is fully operational again.
9.10 Nodes
In this section, we describe the tasks that can be performed at an individual node level.
Tip: The -delim parameter condenses the output and separates the data fields with the
specified delimiter character as opposed to wrapping text over multiple lines.
port_speed 2Gb
port_id 50050768011027E2
port_status active
port_speed 2Gb
port_id 50050768012027E2
port_status active
port_speed 2Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.SVC2N1
iscsi_alias
failover_active no
failover_name SVC1N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
To have a fully functional SVC system, you must add a second node to the configuration. To
add a node to a clustered system, complete the following steps to gather the necessary
information:
1. Before you can add a node, you must know which unconfigured nodes are available as
candidates. Issue the lsnodecandidate command, as shown in Example 9-102.
2. You must specify to which I/O Group you are adding the node. If you enter the lsnode
command, you can identify the I/O Group ID of the group to which you are adding your
node, as shown in Example 9-103.
Tip: The node that you want to add must be connected to an uninterruptible power supply
unit with a serial number that differs from the uninterruptible power supply unit of the first
node.
IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS
_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,
enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,i
qn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
3. Now that you know the available nodes, use the addnode command to add the node to the
SVC clustered system configuration, as shown in Example 9-104.
This command adds the candidate node with the wwnodename of 50050768010037E5 to
the I/O Group called io_grp1.
The -wwnodename parameter (50050768010037E5) was used. However, you can also use the
-panelname parameter (104643) instead, as shown in Example 9-105. If you are standing in
front of the node, it is easier to read the panel name than it is to get the worldwide node
name (WWNN).
The optional -name parameter (SVC1N3) also was used. If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).
Name: If you want to provide a name, you can use letters A - Z and a - z, numbers 0 - 9,
the dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the
name cannot start with a number, dash, or the word “node” because this prefix is
reserved for SVC assignment only.
4. If the addnode command returns no information, ensure that the second node is powered
on and that the zones are correctly defined. Preexisting system configuration data that is
stored in the node can also prevent it from being added. If you are sure that this node is
not part of another active SVC system, you can use the service panel to delete the existing
system information. After this action is complete, reissue the lsnodecandidate command
and you see that the node is listed. A hedged sketch of the add sequence follows this
procedure.
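For illustration, a minimal sketch of the add sequence, reusing the WWNN, I/O Group, and
node name from this scenario (the candidate listing output is omitted):
IBM_2145:ITSO_SVC2:superuser>lsnodecandidate
IBM_2145:ITSO_SVC2:superuser>addnode -wwnodename 50050768010037E5 -iogrp io_grp1 -name SVC1N3
Node, id [4], successfully added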
Name: The chnode command specifies the new name first. You can use letters A - Z and
a - z, numbers 0 - 9, the dash (-), and the underscore (_). The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or the word “node”
because this prefix is reserved for SVC assignment only.
9.10.4 Deleting a node
Use the rmnode command to remove a node from the SVC clustered system configuration, as
shown in Example 9-107.
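A minimal sketch of that removal, using the node name SVC1N2 from this scenario:
IBM_2145:ITSO_SVC2:superuser>rmnode SVC1N2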
Because SVC1N2 also was the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new
configuration node.
Important: If this node is the last node in an I/O Group and volumes are still assigned to
the I/O Group, the node is not deleted from the clustered system.
If this node is the last node in the system and the I/O Group has no remaining volumes, the
clustered system is destroyed and all virtualization information is lost. Any data that is still
required must be backed up or migrated before the system is destroyed.
Use the stopcluster -node command to shut down a single node, as shown in
Example 9-108.
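A minimal sketch of that single-node shutdown, using the node name from this scenario:
IBM_2145:ITSO_SVC2:superuser>stopcluster -node SVC1N3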
This command shuts down node SVC1N3 in a graceful manner. When this node is shut down,
the other node in the I/O Group destages the contents of its cache and enters write-through
mode until the node is powered up and rejoins the clustered system.
Important: You do not need to stop FlashCopy mappings, remote copy relationships, and
data migration operations. The other node handles these activities, but be aware that the
system has a single point of failure now.
If this node is the last node in an I/O Group, all access to the volumes in the I/O Group is lost.
Verify that you want to shut down this node before this command is run; in that case, you must
also specify the -force flag.
By reissuing the lsnode command (as shown in Example 9-109 on page 562), we can see
that the node is now offline.
Example 9-109 lsnode command
IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC2N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.SVC2N1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC2:superuser>lsnode SVC1N3
CMMVC5782E The object specified is offline.
Restart: To restart the node manually, press the power-on button that is on the service
panel of the node.
We completed the tasks that are required to view, add, delete, rename, and shut down a node
within an SVC environment.
In our example, the SVC predefines five I/O Groups. In a four-node clustered system (similar
to our example), only two I/O Groups are in use. The other I/O Groups (io_grp2 and io_grp3)
are for a six-node or eight-node clustered system.
The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group
that normally owns them experience multiple failures. By using this design, the volumes can
be moved to the recovery I/O Group and then into a working I/O Group. While temporarily
assigned to the recovery I/O Group, I/O access is not possible.
To see whether the renaming was successful, run the lsiogrp command again to see the
change.
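The rename itself is performed with the chiogrp command. A hedged sketch, using a
hypothetical new name for io_grp1:
IBM_2145:ITSO_SVC2:superuser>chiogrp -name io_grpA io_grp1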
Use the rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in
Example 9-113.
The rmhostiogrp command uses the following parameters:
-iogrp iogrp_list -iogrpall
Specify a list of one or more I/O Groups that must be unmapped from the host. This
parameter is mutually exclusive with the -iogrpall option. The -iogrpall option specifies
that all of the I/O Groups must be unmapped from the specified host. This parameter is
mutually exclusive with -iogrp.
-force
If the removal of a host-to-I/O-Group mapping results in the loss of volume-to-host
mappings, the command fails unless the -force flag is used. The -force flag overrides this
behavior and forces the deletion of the host-to-I/O-Group mapping.
host_id_or_name
Identify, by ID or name, the host from which the I/O Groups must be unmapped.
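A hedged sketch of such an unmapping (the host name Almaden is hypothetical and used for
illustration only):
IBM_2145:ITSO_SVC2:superuser>rmhostiogrp -iogrp 1 Almaden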
To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost
command, as shown in Example 9-115.
Example 9-116 lsusergrp command
IBM_2145:ITSO_SVC2:superuser>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no
Example 9-117 shows a simple example of creating a user. User John is added to the user
group Monitor with the password m0nitor.
Example 9-117 mkuser creates a user called John with password m0nitor
IBM_2145:ITSO_SVC2:superuser>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created
Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server.
The user groups include a defined authority role, as listed in Table 9-2.
Copy operator
Role: All display commands and the following commands: prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap,
chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp,
startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership
User: Controls all of the copy functionality of the cluster
Monitor
Role: All display commands and the following commands: finderr, dumperrlog,
dumpinternallog, chcurrentuser, and svcconfig backup
User: Needs view access only
As of SVC 6.3, you can connect to the clustered system by using the same user name with
which you log in to an SVC GUI.
To view the user roles on your system, use the lsusergrp command, as shown in
Example 9-118.
To view the defined users and the user groups to which they belong, use the lsuser
command, as shown in Example 9-119.
1,superuser,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor
By using the chuser command, you can modify a user. You can rename a user, assign a new
password (if you are logged on with administrative privileges), and move a user from one user
group to another user group. However, be aware that a user can be a member of only one
group at a time.
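A hedged sketch of moving the user John to the CopyOperator group and assigning a new
password (the password value is illustrative only):
IBM_2145:ITSO_SVC2:superuser>chuser -usergrp CopyOperator -password c0py0p John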
The SVC console performs actions by issuing Common Information Model (CIM) commands
to the CIM object manager (CIMOM), which then runs the CLI programs.
Actions that are performed by using the native GUI and the SVC Console are recorded in the
audit log.
The audit log contains approximately 1 MB of data, which can contain about 6,000
average-length commands. When this log is full, the system copies it to a new file in the
/dumps/audit directory on the configuration node and resets the in-memory audit log.
To display entries from the audit log, use the catauditlog -first 5 command to return a list
of five in-memory audit log entries, as shown in Example 9-120.
462 110928160755 superuser 10.18.228.173 0 1 svctask mkvdisk
-iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20%
463 110928160817 superuser 10.18.228.173 0 svctask rmvdisk
1
If you must dump the contents of the in-memory audit log to a file on the current configuration
node, use the dumpauditlog command. This command does not provide any feedback; it
provides the prompt only. To obtain a list of the audit log dumps, use the lsdumps command,
as shown in Example 9-121.
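A minimal sketch of that sequence (the dump command produces no output; filtering lsdumps
by the audit directory is shown as an assumption):
IBM_2145:ITSO_SVC2:superuser>dumpauditlog
IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/audit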
Scenario description
We use the scenario that is described in this section in both the CLI section and the GUI
section. In this scenario, we want to FlashCopy the following volumes:
DB_Source: Database files
Log_Source: Database log files
App_Source: Application files
In our scenario, the application files are independent of the database; therefore, we create a
single FlashCopy mapping for App_Source. We make two FlashCopy targets for DB_Source
and Log_Source and, therefore, two Consistency Groups. The scenario is shown in Figure 9-6
on page 569.
Figure 9-6 FlashCopy scenario
9.13.3 Creating a FlashCopy Consistency Group
Use the command mkfcconsistgrp to create a new FlashCopy Consistency Group. The ID of
the new group is returned. If you created several FlashCopy mappings for a group of volumes
that contain elements of data for the same application, it might be convenient to assign these
mappings to a single FlashCopy Consistency Group. Then, you can issue a single prepare or
start command for the whole group so that, for example, all files for a particular database are
copied at the same time.
In Example 9-122, the FCCG1 and FCCG2 Consistency Groups are created to hold the
FlashCopy maps of DB and Log. This step is important for FlashCopy on database
applications because it helps to maintain data integrity during FlashCopy.
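A minimal sketch of creating the two Consistency Groups (the returned IDs are indicative
only):
IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created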
In Example 9-123, we checked the status of the Consistency Groups. Each Consistency
Group has a status of empty.
If you want to change the name of a Consistency Group, you can use the chfcconsistgrp
command. Type chfcconsistgrp -h for help with this command.
When this command is run, a FlashCopy mapping logical object is created. This mapping
persists until it is deleted. The mapping specifies the source and destination volumes. The
destination must be identical in size to the source or the mapping fails. Issue the lsvdisk
-bytes command to find the exact size of the source volume for which you want to create a
target disk of the same size.
In a single mapping, the source and destination cannot be the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a Consistency Group. These groups of mappings can be triggered at
the same time, which enables multiple volumes to be copied at the same time and creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files are on separate disks.
If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can be started only
on an individual basis.
The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy does not proceed in the background. The default is 50.
Tip: You can use a parameter to delete FlashCopy mappings automatically after the
background copy is completed (when the mapping gets to the idle_or_copied state). Use
the following command:
mkfcmap -autodelete
A mapping that is in a cascade with dependent mappings is not deleted automatically
because it cannot reach the idle_or_copied state in this situation.
Example 9-124 shows the creation of the first FlashCopy mapping for DB_Source, Log_Source,
and App_Source.
Example 9-124 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target1 -name
DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
Example 9-125 shows the command to create a second FlashCopy mapping for volume
DB_Source and volume Log_Source.
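A hedged sketch of the remaining mappings, using the mapping, volume, and group names
that appear in the later lsfcmap output (the returned IDs are indicative only):
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Target1 -name App_Map1
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created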
Example 9-126 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.
2 App_Map1 9 App_Source 10 App_Target1
idle_or_copied 0 50 100 off
no no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
If you want to change the FlashCopy mapping, you can use the chfcmap command. Enter
chfcmap -h to get help with this command.
When the prestartfcmap command is run, the mapping enters the Preparing state. After the
preparation is complete, it changes to the Prepared state. At this point, the mapping is ready
for triggering. Preparing and the subsequent triggering are performed on a Consistency
Group basis.
Only mappings that belong to Consistency Group 0 can be prepared on their own because
Consistency Group 0 is a special group that contains the FlashCopy mappings that do not
belong to any Consistency Group. A FlashCopy must be prepared before it can be triggered.
In our scenario, App_Map1 is not in a Consistency Group. In Example 9-127, we show how to
start the preparation for App_Map1.
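A minimal sketch of that preparation (the lsfcmap output that follows shows the resulting
state):
IBM_2145:ITSO_SVC3:superuser>prestartfcmap App_Map1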
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
Another option is to add the -prep parameter to the startfcmap command, which prepares
the mapping and then starts the FlashCopy.
In Example 9-127 on page 572, we also show how to check the status of the current
FlashCopy mapping. The status of App_Map1 is prepared.
When you assign several mappings to a FlashCopy Consistency Group, you must issue only
a single prepare command for the whole group to prepare all of the mappings at one time.
Example 9-128 shows how we prepare the Consistency Groups for DB and Log and check the
result. After the command runs all of the FlashCopy maps that we have, all of the maps and
Consistency Groups are in the prepared status. Now, we are ready to start the FlashCopy.
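A minimal sketch of preparing both groups:
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG2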
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared
9.13.7 Starting (triggering) FlashCopy mappings
The startfcmap command is used to start a single FlashCopy mapping. When a single
FlashCopy mapping is started, a point-in-time copy of the source volume is created on the
target volume.
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is later updated on the source is copied to the destination. We
suggest that you use this option only when the target is needed as a backup copy while the
mapping exists in the Copying state. If the copy is stopped, the destination is unusable.
If you want a duplicate copy of the source at the destination, set the background copy rate
greater than 0. By setting this rate, the system copies all of the data (even unchanged data) to
the destination and eventually reaches the idle_or_copied state. After this data is copied, you
can delete the mapping and have a usable point-in-time copy of the source at the destination.
In Example 9-129, App_Map1 changes to the copying status after the FlashCopy is started.
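A minimal sketch of starting the prepared mapping:
IBM_2145:ITSO_SVC3:superuser>startfcmap App_Map1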
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
Alternatively, you can also query the copy progress by using the lsfcmap command. As
shown in Example 9-131, DB_Map1 returns information that the background copy is 23%
completed and Log_Map1 returns information that the background copy is 41% completed.
DB_Map2 returns information that the background copy is 5% completed and Log_Map2 returns
information that the background copy is 4% completed.
id progress
4 4
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress DB_Map2
id progress
3 5
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress App_Map1
id progress
2 10
When the background copy completes, the FlashCopy mapping enters the idle_or_copied
state. When all of the FlashCopy mappings in a Consistency Group enter this status, the
Consistency Group is at the idle_or_copied status.
When in this state, the FlashCopy mapping can be deleted and the target disk can be used
independently if, for example, another target disk is to be used for the next FlashCopy of the
particular source volume.
Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment,
consider whether you want to keep any of the dependent mappings. If you do not want to
keep these mappings, run the stop command with the -force parameter. This command
stops all of the dependent maps and negates the need for the stopping copy process to
run.
When a FlashCopy mapping is stopped, the target volume becomes invalid. The target
volume is set offline by the SVC. The FlashCopy mapping must be prepared again or
retriggered to bring the target volume online again.
Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by the SVC if the mapping
is in the copying state and its progress is less than 100.
Example 9-132 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 changed
to idle_or_copied.
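A minimal sketch of stopping the mapping (the lsfcmap output that follows shows the
resulting state):
IBM_2145:ITSO_SVC3:superuser>stopfcmap App_Map1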
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use or when you want to modify the FlashCopy Consistency Group. When a Consistency
Group is stopped, the target volume might become invalid and be set offline by the SVC,
depending on the state of the mapping.
As shown in Example 9-133, we stop the FCCG1 and FCCG2 Consistency Groups. The status of
the two Consistency Groups changed to stopped. Most of the FlashCopy mapping
relationships now have the status of stopped. As you can see, several of them completed the
copy operation and are now in a status of idle_or_copied.
IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
IBM_2145:ITSO_SVC3:superuser>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_
name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,res
toring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,
no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,
no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
Deleting a mapping deletes only the logical relationship between the two volumes. However,
when the deletion is issued on an active FlashCopy mapping with the -force flag, it renders
the data on the FlashCopy mapping target volume inconsistent.
Tip: If you want to use the target volume as a normal volume, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when the FlashCopy mapping is created.
If you also want to delete all of the mappings in the Consistency Group, first delete the
mappings and then delete the Consistency Group.
As shown in Example 9-135, we delete all of the maps and Consistency Groups and then
check the result.
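A hedged sketch of that cleanup sequence, using the mapping and group names from this
scenario:
IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap App_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcmap
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp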
9.13.14 Migrating a volume to a thin-provisioned volume
Complete the following steps to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned, space-efficient target volume with the same size as the volume
that you want to migrate.
Example 9-136 shows the details of a volume with ID 11. It was created as a
thin-provisioned volume with the same size as the App_Source volume.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB
2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and
the thin-provisioned volume is the target. Specify a copy rate as high as possible and
activate the -autodelete option for the mapping, as shown in Example 9-137.
3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown
in Example 9-138.
IBM_2145:ITSO_SVC3:superuser>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does
not exist.
IBM_2145:ITSO_SVC3:superuser>
An independent copy of the source volume (App_Source) was created. The migration
completes, as shown in Example 9-142.
IBM_2145:ITSO_SVC3:superuser>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
Real size: Regardless of the real size that you defined for the target thin-provisioned
volume, the real size becomes at least the capacity of the source volume.
To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same
scenario.
In Example 9-143, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1 is a reverse
FlashCopy mapping. We also have a cascade FCMAP_2 where its source is FCMAP_1’s target
volume, and its target is a separate volume that is named Volume_FC_T1.
In our example, we started the FCMAP_1 and later FCMAP_2 after the environment was created.
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy
mappings.
When a reverse FlashCopy mapping is started, you must use the -restore option to indicate
that you want to overwrite the data on the source disk of the forward mapping.
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name
FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
idle_or_copied 0 50 100 off 1 FCMAP_rev_1
no no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0
50 100 off 1 FCMAP_rev_1 no
no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 4 50 100 off
no 110929143739 no
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
copying 43 100 56 off 1 FCMAP_rev_1 no
110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
copying 56 100 43 off 0 FCMAP_1 yes
110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 37 100 100 off no
110929151926 no
As you can see in Example 9-143 on page 583, FCMAP_rev_1 shows a restoring value of yes
while the FlashCopy mapping is copying. After it finishes copying, the restoring value field is
changed to no.
9.13.16 Split-stopping of FlashCopy maps
The stopfcmap command has a -split option. By using this option, the source volume of a
map (which is 100% complete) can be removed from the head of a cascade when the map is
stopped.
For example, if we have four volumes in a cascade (A → B → C → D), and the map A → B is
100% complete, the use of the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied and the remaining cascade becoming B → C → D.
Without the -split option, volume A remains at the head of the cascade (A → C → D).
Consider the following sequence of steps:
1. The user takes a backup that uses the mapping A → B. A is the production volume and B
is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping to
B → A.
3. The user then takes another backup from the production disk A, which results in the
cascade B → A → C.
Stopping A → B without the -split option results in the cascade B → C. The backup disk B is
now at the head of this cascade.
When the user next wants to take a backup to B, the user can still start mapping A → B (by
using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C →
A).
Stopping A → B with the -split option results in the cascade A → C. This action does not
result in the same problem because the production disk A is at the head of the cascade
instead of the backup disk B.
Intercluster example: The example in this section is for intercluster operations only.
If you want to set up intracluster operations, we highlight the parts of the following
procedure that you do not need to perform.
In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
system ITSO_SVC2 at the primary site and the SVC system ITSO_SVC4 at the secondary site.
Table 9-3 shows the details of the volumes.
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a
CG_W2K3_MM Consistency Group is created to handle Metro Mirror relationships for them.
Because application files are independent of the database in this scenario, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 shows the Metro
Mirror setup.
5. Create the Metro Mirror relationship for MM_App_Pri with the following settings:
– Master: MM_App_Pri
– Auxiliary: MM_App_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL3
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 9.14.3, “Creating a Metro Mirror Consistency Group” on
page 590.
Pre-verification
To verify that both systems can communicate with each other, use the
lspartnershipcandidate command.
As shown in Example 9-144, ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC2 for
the SVC system partnership, and vice versa. Therefore, both systems communicate with
each other.
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006AC03A42 no ITSO_SVC1
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
000002006BE04FC4 no ITSO_SVC2
Example 9-145 on page 588 shows the output of the lspartnership and lssystem
commands before setting up the Metro Mirror relationship. We show them so that you can
compare with the same relationship after setting up the Metro Mirror relationship.
As of SVC 6.3, you can create a partnership between the SVC system and the IBM Storwize
V7000 system. Be aware that to create this partnership, you must change the layer
parameter on the IBM Storwize V7000 system. It must be changed from storage to
replication with the chsystem command.
This parameter cannot be changed on the SVC system. It is fixed to the value of appliance,
as shown in Example 9-145 on page 588.
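A hedged sketch of that change, run on the IBM Storwize V7000 system (the prompt shown is
illustrative only):
IBM_Storwize:ITSO-Storwize-V7000-2:superuser>chsystem -layer replication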
Example 9-145 Pre-verification of system configuration
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
IBM_2145:ITSO_SVC2:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC2
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC4:superuser>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer appliance
To check the status of the newly created partnership, run the lspartnership command. Note
that the new partnership is only partially configured. It remains partially configured until the
partnership is also created from the other system.
Example 9-146 Creating the partnership from ITSO_SVC2 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
In Example 9-147, the partnership is created from ITSO_SVC4 back to ITSO_SVC2, specifying a
bandwidth of 50 MBps to be used for the background copy.
After the partnership is created, verify that the partnership is fully configured on both systems
by reissuing the lspartnership command.
Example 9-147 Creating the partnership from ITSO_SVC4 to ITSO_SVC2 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
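With the partnership fully configured, the Metro Mirror Consistency Group is created on the
master system with ITSO_SVC4 as the auxiliary system. A minimal sketch (the returned ID is
indicative only), followed by the lsrcconsistgrp listing:
IBM_2145:ITSO_SVC2:superuser>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created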
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none
9.14.4 Creating the Metro Mirror relationships
In Example 9-149, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri
and MM_DBLog_Pri. Also, we make them members of the Metro Mirror Consistency Group
CG_W2K3_MM. We use the lsvdisk command to list all of the volumes in the ITSO_SVC2 system.
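A hedged sketch of those two relationship definitions, using the volume names that appear in
the lsrcrelationship output that follows (the returned IDs are indicative only):
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [3], successfully created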
To verify the new Metro Mirror relationships, list them with the lsrcrelationship command.
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC2 0 MM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM
inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC2 3 MM_Log_Pri
0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0
CG_W2K3_MM inconsistent_stopped 50 0 metro none
9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
In Example 9-150, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri.
After the stand-alone Metro Mirror relationship is created, we check the status of this Metro
Mirror relationship.
The state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with
the -sync option. The -sync option indicates that the secondary (auxiliary) volume is
synchronized with the primary (master) volume. Initial background synchronization is skipped
when this option is used, even though the volumes are not synchronized in this scenario.
We want to show the option of pre-synchronized master and auxiliary volumes before the
relationship is set up. We created the relationship for MM_App_Sec by using the -sync option.
Tip: The -sync option is used only when the target volume mirrored all of the data from the
source volume. By using this option, there is no initial background copy between the
primary volume and the secondary volume.
MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.
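A hedged sketch of creating MMREL3 with the -sync option (the returned ID is indicative only),
followed by the status check:
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -cluster ITSO_SVC4 -sync -name MMREL3
RC Relationship, id [2], successfully created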
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.6 Starting Metro Mirror
Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to
use Metro Mirror relationships in our environment.
When Metro Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy for a data set if a failure occurs that affects the production site.
In this section, we show how to stop and start stand-alone Metro Mirror relationships and
Consistency Groups.
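For the stand-alone relationship, a minimal sketch of the start command:
IBM_2145:ITSO_SVC2:superuser>startrcrelationship MMREL3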
Example 9-152 Starting the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
master inconsistent_copying 2 metro none
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror Consistency Groups or relationships change their state.
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL2
id 3
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 3
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 3
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all Metro Mirror relationships complete the background copy, the Consistency Group
enters the consistent_synchronized state, as shown in Example 9-154.
Example 9-155 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO_SVC2:superuser>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
If we want to enable access (write I/O) to the secondary volume later, we reissue the
stoprcconsistgrp command and specify the -access flag. The Consistency Group changes
to the idling state, as shown in Example 9-157.
Example 9-157 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
If any updates were performed on the master or the auxiliary volume in any of the Metro
Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we
must use the -force flag to start a relationship. If the -force flag is not used, the command
fails.
In Example 9-159, we change the copy direction by specifying the auxiliary volumes to
become the primaries.
Example 9-159 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
9.14.15 Switching the copy direction for a Metro Mirror relationship
When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command, which
specifies the primary volume. If the specified volume is a primary when you issue this
command, the command has no effect.
In Example 9-160, we change the copy direction for the stand-alone Metro Mirror relationship
by specifying the auxiliary volume to become the primary volume.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from the primary to the secondary because all of the I/O is
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required before the switchrcrelationship command is used.
Example 9-160 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC2:superuser>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.16 Switching the copy direction for a Metro Mirror Consistency Group
When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the Consistency Group by using the switchrcconsistgrp
command and specifying the primary volume.
If the specified volume is already a primary volume when you issue this command, the
command has no effect.
In Example 9-161, we change the copy direction for the Metro Mirror Consistency Group by
specifying the auxiliary volume to become the primary volume.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all of the I/O is inhibited when
that volume becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.
Example 9-161 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
IBM_2145:ITSO_SVC2:superuser>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
In this section, we describe how to configure the SVC system partnership for each
configuration.
Important: To have a supported and working configuration, all SVC systems must be at
level 5.1 or higher.
In our scenarios, we configure the SVC partnership by referring to the clustered systems as
A, B, C, and D, as shown in the following examples:
ITSO_SVC1 = A
ITSO_SVC2 = B
ITSO_SVC3 = C
ITSO_SVC4 = D
Example 9-162 shows the available systems for a partnership by using the
lspartnershipcandidate command on each system.
IBM_2145:ITSO_SVC1:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006BE04FC4 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC2
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC1
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC1
Example 9-163 shows the sequence of mkpartnership commands that are run to create a
star configuration.
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
From ITSO_SVC2
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50
From ITSO_SVC1
IBM_2145:ITSO_SVC1:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC1 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
From ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
From ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Triangle configuration
Figure 9-9 shows the triangle configuration.
Example 9-164 shows the sequence of mkpartnership commands that are run to create a
triangle configuration.
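A hedged sketch, assuming the triangle consists of ITSO_SVC2, ITSO_SVC3, and ITSO_SVC4, with
each system partnered with the other two:
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3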
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Example 9-165 on page 605 shows the sequence of mkpartnership commands that are run
to create a fully connected configuration.
Example 9-165 Creating a fully connected configuration
From ITSO_SVC2 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Daisy-chain configuration
Figure 9-11 shows the daisy-chain configuration.
Example 9-166 shows the sequence of mkpartnership commands that are run to create a
daisy-chain configuration.
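A hedged sketch, assuming the chain runs ITSO_SVC1 - ITSO_SVC2 - ITSO_SVC3 - ITSO_SVC4, with
each adjacent pair partnered in both directions:
IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3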
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
9.15 Global Mirror operation
In the following scenario, we set up an intercluster Global Mirror relationship between the
SVC system ITSO_SVC2 at the primary site and the SVC system ITSO_SVC4 at the secondary
site.
Intercluster example: This example is for an intercluster Global Mirror operation only. If
you want to set up an intracluster operation, we highlight the steps in the following
procedure that you do not need to perform.
Table 9-4 Details of the volumes for the Global Mirror relationship scenario
Content of volume Volumes at primary site Volumes at secondary site
9.15.1 Setting up Global Mirror
In this section, we assume that the source and target volumes were created and that the ISLs
and zoning are in place to enable the SVC systems to communicate.
Intracluster Global Mirror: If you are creating an intracluster Global Mirror, do not perform
the next step. Instead, go to 9.15.3, “Changing link tolerance and system delay simulation”
on page 610.
Pre-verification
To verify that both clustered systems can communicate with each other, use the
lspartnershipcandidate command. Example 9-167 confirms that our clustered systems can
communicate because ITSO_SVC4 is an eligible SVC system candidate to ITSO_SVC2 for the
SVC system partnership, and vice versa. Therefore, both systems can communicate with
each other.
id configured name
000002006BE04FC4 no ITSO_SVC2
In Example 9-168, we show the output of the lspartnership command before we set up the
SVC systems’ partnership for Global Mirror. We show this output for comparison after we set
up the SVC partnership.
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
Example 9-169 Creating the partnership from ITSO_SVC2 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 100
In Example 9-170, we create the partnership from ITSO_SVC4 back to ITSO_SVC2 and specify a
100 MBps bandwidth to use for the background copy. After the partnership is created, verify
that the partnership is fully configured by reissuing the lspartnership command.
Example 9-170 Creating the partnership from ITSO_SVC4 to ITSO_SVC2 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 100 ITSO_SVC2
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 100
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote fully_configured 100
9.15.3 Changing link tolerance and system delay simulation
The -gmlinktolerance parameter defines the sensitivity of the SVC to overload conditions on
the intercluster link. The value is the number of seconds of continuous link difficulties that is
tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at
the primary site. To change the value, use the following command:
chsystem -gmlinktolerance link_tolerance
Important: We strongly suggest that you use the default value. If the link is overloaded for
a period (which affects host I/O at the primary site), the relationships are stopped to protect
those hosts.
To check the current settings for the delay simulation, use the following command:
lssystem
In Example 9-171, we show the modification of the delay simulation value and a change of
the Global Mirror link tolerance parameters. We also show the changed values of the Global
Mirror link tolerance and delay simulation parameters.
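The commands behind such a change might resemble the following sketch; the values match those that appear in the lssystem output that follows, but the exact sequence is illustrative:
IBM_2145:ITSO_SVC2:superuser>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC2:superuser>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC2:superuser>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC2:superuser>lssystem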
total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_GM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none
We use the lsvdisk command to list all of the volumes in the ITSO_SVC2 system. Then, we
use the lsrcrelationshipcandidate command to show the possible candidate volumes for
GM_DB_Pri in ITSO_SVC4.
After checking all of these conditions, we use the mkrcrelationship command to create the
Global Mirror relationship. To verify the new Global Mirror relationships, we list them by using
the lsrcrelationship command.
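The creation commands might resemble the following sketch; the relationship, volume, and Consistency Group names follow this scenario, and the exact combination of flags is an assumption (GMREL3 is created as a stand-alone relationship with the -sync option):
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -global -name GMREL1
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -global -name GMREL2
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -global -sync -name GMREL3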
0 GMREL1 000002006BE04FC4 ITSO_SVC2 0 GM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 GM_DB_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
1 GMREL2 000002006BE04FC4 ITSO_SVC2 1 GM_DBLog_Pri 0000020061C06FCA
ITSO_SVC4 1 GM_DBLog_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
The status of GMREL3 is consistent_stopped because it was created with the -sync option. The
-sync option indicates that the secondary (auxiliary) volume is synchronized with the primary
(master) volume. The initial background synchronization is skipped when this option is used.
GMREL1 and GMREL2 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC2:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_GM:
inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC2:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG_W2
K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC2:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consisten
t_stopped:50:100:global:none
When Global Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy if a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationships and the
Consistency Group.
9.15.8 Starting a stand-alone Global Mirror relationship
In Example 9-175, we start the stand-alone Global Mirror relationship that is named GMREL3.
Because the Global Mirror relationship was in the Consistent stopped state and no updates
were made to the primary volume, the relationship quickly enters the Consistent synchronized
state.
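A minimal sketch of such a start, assuming that the relationship is already synchronized:
IBM_2145:ITSO_SVC2:superuser>startrcrelationship GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3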
Upon the completion of the background copy, the CG_W2K3_GM Global Mirror Consistency
Group enters the Consistent synchronized state.
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror Consistency Groups or relationships change state.
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all of the Global Mirror relationships complete the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 9-178.
First, we show how to stop and restart the stand-alone Global Mirror relationships and the
Consistency Group.
9.15.12 Stopping a stand-alone Global Mirror relationship
In Example 9-179, we stop the stand-alone Global Mirror relationship while we enable access
(write I/O) to the primary and the secondary volume. As a result, the relationship enters the
Idling state.
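A minimal sketch of such a stop, where the -access parameter enables write I/O to the secondary volume:
IBM_2145:ITSO_SVC2:superuser>stoprcrelationship -access GMREL3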
Example 9-180 Stopping a Global Mirror Consistency Group without specifying the -access parameter
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
If we want to enable access (write I/O) for the secondary volume later, we can reissue the
stoprcconsistgrp command and specify the -access parameter. The Consistency Group
changes to the Idling state, as shown in Example 9-181.
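A minimal sketch of that command:
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp -access CG_W2K3_GM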
If any updates were performed on the master volume or the auxiliary volume, consistency is
compromised. Therefore, we must issue the -force parameter to restart the relationship, as
shown in Example 9-182. If the -force parameter is not used, the command fails.
Example 9-182 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC2:superuser>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
If any updates were performed on the master volume or the auxiliary volume in any of the
Global Mirror relationships in the Consistency Group, consistency is compromised. Therefore,
we must issue the -force parameter to start the Consistency Group. If the -force parameter is
not used, the command fails.
In Example 9-183, we restart the Consistency Group and change the copy direction by
specifying the auxiliary volumes to become the primaries.
Example 9-183 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.17 Switching the copy direction for a Global Mirror relationship
When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command and
specifying the primary volume.
If the volume that is specified as the primary already is a primary when this command is run,
the command has no effect.
In Example 9-184, we change the copy direction for the stand-alone Global Mirror relationship
and specify the auxiliary volume to become the primary.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited to that
volume when it becomes the secondary. Therefore, careful planning is required before the
switchrcrelationship command is used.
Example 9-184 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC2:superuser>switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.15.18 Switching the copy direction for a Global Mirror Consistency Group
When a Global Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the relationship by using the switchrcconsistgrp command
and specifying the primary volume. If the volume that is specified as the primary already is a
primary when this command is run, the command has no effect.
In Example 9-185, we change the copy direction for the Global Mirror Consistency Group and
specify the auxiliary to become the primary.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited to that
volume when it becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.
Example 9-185 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC2:superuser>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
When Global Mirror operates in cycling mode, changes are tracked and, where needed,
copied to intermediate Change Volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered if a failover occurs. However, lower bandwidth is required to provide
an effective solution because the data transfer can be smoothed over a longer period.
A Global Mirror relationship consists of two volumes: primary and secondary. With SVC 6.3,
each of these volumes can be associated with a Change Volume. Change Volumes are used to
record changes to the remote copy volume. A FlashCopy relationship exists between the
remote copy volume and the Change Volume. This relationship cannot be manipulated as a
normal FlashCopy relationship. Most commands fail by design because this relationship is an
internal relationship.
Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and
it is enabled by using svctask chrcrelationship -cycling=multi.
The primary Change Volume stores changes to be sent to the secondary volume and the
secondary Change Volume is used to maintain a consistent image at the secondary volume.
Every x seconds, the primary FlashCopy mapping is started automatically, where x is the
cycling period and is configurable. Data is then copied to the secondary volume from the
primary Change Volume. The secondary FlashCopy mapping is started if resynchronization is
needed. Therefore, a consistent copy is always at the secondary volume. The cycling period
is configurable and the default value is 300 seconds.
The recovery point objective (RPO) depends on how long the FlashCopy takes to complete. If
the FlashCopy completes within the cycling time, the maximum RPO = 2 x the cycling time;
otherwise, the RPO = 2 x the copy completion time.
You can estimate the current RPO by using the new freeze_time rcrelationship property. It is
the time of the last consistent image that is present at the secondary. Figure 9-13 shows the
cycling mode with Change Volumes.
In this section, we show how to change the cycling mode of the stand-alone Global Mirror
relationship (GMREL3) and the Consistency Group CG_W2K3_GM Global Mirror relationships
(GMREL1 and GMREL2).
We assume that the source and target volumes were created and that the ISLs and zoning
are in place to enable the SVC systems to communicate. We also assume that the Global
Mirror relationship was established.
Complete the following steps to change the Global Mirror to cycling mode with Change
Volumes:
1. Create thin-provisioned Change Volumes for the primary and secondary volumes at both
sites.
2. Stop the stand-alone relationship GMREL3 to change the cycling mode at the primary site.
3. Set the cycling mode on the stand-alone relationship GMREL3 at the primary site.
4. Set the Change Volume on the master volume relationship GMREL3 at the primary site.
5. Set the Change Volume on the auxiliary volume relationship GMREL3 at the secondary site.
6. Start the stand-alone relationship GMREL3 in cycling mode at the primary site.
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode at the primary site.
8. Set the cycling mode on the Consistency Group at the primary site.
9. Set the Change Volume on the master volume relationship GMREL1 of the Consistency
Group CG_W2K3_GM at the primary site.
10.Set the Change Volume on the auxiliary volume relationship GMREL1 at the secondary site.
11.Set the Change Volume on the master volume relationship GMREL2 of the Consistency
Group CG_W2K3_GM at the primary site.
12.Set the Change Volume on the auxiliary volume relationship GMREL2 at the secondary site.
13.Start the Consistency Group CG_W2K3_GM in the cycling mode at the primary site.
Example 9-186 Creating the thin-provisioned volumes for Global Mirror cycling mode
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created
CG_W2K3_GM consistent_synchronized 50 global
none
2 GMREL3 000002006BE04FC4 ITSO_SVC2 2 GM_App_Pri
0000020061C06FCA ITSO_SVC4 2 GM_App_Sec aux
consistent_synchronized 50 global none
IBM_2145:ITSO_SVC2:superuser>stoprcrelationship GMREL3
9.15.22 Setting the cycling mode on the stand-alone remote copy relationship
In Example 9-188, we set the cycling mode on the relationship by using the
chrcrelationship command. The cyclingmode and masterchange parameters cannot be
entered in the same command.
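The two changes might therefore resemble the following sketch, issued as separate commands; the Change Volume name follows this scenario:
IBM_2145:ITSO_SVC2:superuser>chrcrelationship -cyclingmode multi GMREL3
IBM_2145:ITSO_SVC2:superuser>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3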
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
9.15.24 Setting the Change Volume on the auxiliary volume
In Example 9-190, we set the Change Volume on the auxiliary volume in the secondary site.
From the display, we can see the name of the volume.
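The command might resemble the following sketch, run at the secondary system with the -auxchange parameter; the Change Volume name follows this scenario:
IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL GMREL3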
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
Example 9-192 Stopping the Consistency Group to change the cycling mode
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Example 9-193 Setting the Global Mirror cycling mode on the Consistency Group
IBM_2145:ITSO_SVC2:superuser>chrcconsistgrp -cyclingmode multi CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.28 Setting the Change Volume on the master volume relationships of the
Consistency Group
In Example 9-194 on page 629, we change both of the relationships of the Consistency
Group to add the Change Volumes on the primary volumes. A display shows the name of the
master Change Volumes.
Example 9-194 Setting the Change Volume on the master volume
IBM_2145:ITSO_SVC2:superuser>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL
GMREL1
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC2:superuser>
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Important: The support for migration from 6.3.x.x to 7.4.x.x is limited. Check with your
service representative for the recommended steps.
For more information about the recommended software levels, see this website:
http://www.ibm.com/systems/storage/software/virtualization/svc/index.html
After the software file is uploaded to the system (the /home/superuser/upgrade directory),
you can select the software and apply it to the system. Use the web script and the
applysoftware command. When a new code level is applied, it is automatically installed on all
of the nodes within the system.
The underlying command-line tool runs the sw_preinstall script. This script checks the
validity of the upgrade file and whether it can be applied over the current level. If the upgrade
file is unsuitable, the sw_preinstall script deletes the files to prevent the buildup of invalid
files on the system.
Precaution before you perform the upgrade
Software installation is often considered to be a client’s task. The SVC supports concurrent
software upgrade. You can perform the software upgrade concurrently with I/O user
operations and certain management activities. However, only limited CLI commands are
operational from the time that the installation command starts until the upgrade operation
ends successfully or is backed out. Certain commands fail with a message that indicates that
a software upgrade is in progress.
Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications might have I/O failures during the software upgrade.
Ensure that all I/O paths between all hosts and SANs are working by using the Subsystem
Device Driver (SDD) datapath query commands. Example 9-197 shows the output.
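On a host that runs SDD or SDDDSM, the check might resemble the following sketch (these commands are run on the host, not on the SVC):
datapath query adapter
datapath query device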
Write-through mode: During a software upgrade, periods occur when not all of the nodes
in the system are operational. As a result, the cache operates in write-through mode.
Write-through mode affects the throughput, latency, and bandwidth aspects of
performance.
Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your system is running without problems). Specifically, ensure that the following conditions
are true:
Your uninterruptible power supply units are all getting their power from an external source
and that they are not daisy chained. Ensure that each uninterruptible power supply unit is
not supplying power to another node’s uninterruptible power supply unit.
The power cable and the serial cable, which come from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, another node might also be shut down mistakenly
during the upgrade while one node is shut down.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.
Upgrade procedure
To upgrade the SVC system software, complete the following steps:
1. Before the upgrade is started, you must back up the configuration (9.17, “Backing up the
SAN Volume Controller system configuration” on page 647) and save the backup
configuration file in a safe place.
2. Before you start to transfer the software code to the clustered system, clear the previously
uploaded upgrade files in the /home/superuser/upgrade SVC system directory, as shown
in Example 9-198.
3. Save the data collection for support diagnosis if you experience problems, as shown in
Example 9-199.
4. List the dump that was generated by the previous command, as shown in Example 9-200.
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz
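For reference, the commands behind steps 2 through 4 might resemble the following sketch; the snap file is generated with a system-assigned name:
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /home/superuser/upgrade
IBM_2145:ITSO_SVC2:superuser>svc_snap
IBM_2145:ITSO_SVC2:superuser>lsdumps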
5. Save the generated dump in a safe place by using the pscp command, as shown in
Figure 9-14.
Note: The pscp command does not work if you did not upload your PuTTY SSH private
key or if you are not using the user ID and password with the PuTTY pageant agent, as
shown in Figure 9-14.
6. Upload the new software package by using PuTTY Secure Copy. Enter the pscp -load
command, as shown in Example 9-202.
7. Upload the SVC Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the
command, as shown in Example 9-203.
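From the management workstation, the two uploads might resemble the following sketch; the saved PuTTY session name, the software package file name, and the cluster IP address are assumptions for illustration:
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 IBM2145_INSTALL_7.4.0.0 superuser@10.18.228.81:/home/superuser/upgrade
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 IBM2145_INSTALL_upgradetest_12.31 superuser@10.18.228.81:/home/superuser/upgrade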
8. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the lsdumps command, as shown in Example 9-204.
9. Now that the packages are uploaded, install the SVC Software Upgrade Test Utility, as
shown in Example 9-205.
IBM_2145:ITSO_SVC2:superuser>applysoftware -file
IBM2145_INSTALL_upgradetest_12.31
CMMVC6227I The package installed successfully.
10.Using the svcupgradetest command, test the upgrade for known issues that might prevent
a software upgrade from completing successfully, as shown in Example 9-206.
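A minimal sketch of that test, assuming a 7.4.0.0 target level:
IBM_2145:ITSO_SVC2:superuser>svcupgradetest -v 7.4.0.0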
While the upgrade runs, you can check the status, as shown in Example 9-208.
system_next_node_name node2
The new code is distributed and applied to each node in the SVC system:
– During the upgrade, you can issue the lsupdate command to see the status of the
upgrade.
– To verify that the upgrade was successful, you can run the lssystem and lsnodevpd
commands, as shown in Example 9-209. (We truncated the lssystem and lsnodevpd
information for this example.)
IBM_2145:ITSO_SVC2:superuser>lsnodevpd 1
id 1
...
...
system code level: 4 fields
id 1
node_name ITSO_SVCN1
WWNN 0x500507680c000508
code_level 7.4.0.0 (build 103.11.1410200000)
To generate a new log before you analyze unfixed errors, run the dumperrlog command, as
shown in Example 9-210 on page 638.
Example 9-210 dumperrlog command
IBM_2145:ITSO_SVC2:superuser>dumperrlog
You can add the -prefix parameter to your command to change the default prefix of errlog
to something else, as shown in Example 9-211.
To see the file name, enter the command that is shown in Example 9-212.
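The two commands might resemble the following sketch; the prefix value is illustrative:
IBM_2145:ITSO_SVC2:superuser>dumperrlog -prefix myerrlog
IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/elogs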
Maximum number of event log dump files: A maximum of 10 event log dump files per
node are kept on the system. When the 11th dump is made, the oldest existing dump file
for that node is overwritten. The directory might also hold log files that are retrieved from
other nodes. These files are not counted.
The SVC deletes the oldest file (when necessary) for this node to maintain the maximum
number of files. The SVC does not delete files from other nodes unless you issue the
cleardumps command.
After you generate your event log, you can issue the finderr command to scan the event log
for any unfixed events, as shown in Example 9-213.
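A minimal sketch of that check:
IBM_2145:ITSO_SVC2:superuser>finderr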
As you can see, we have one unfixed event on our system. To analyze this event, we
download it onto our personal computer. To know more about this unfixed event, we review
the event log in more detail. We use the PuTTY Secure Copy process to copy the file from the
system to our local management workstation, as shown in Example 9-214.
Example 9-214 Using the pscp command to copy event logs off from the SVC
In W2K3 → Start → Run → cmd
ITSO_SVC2_errlog_110711_1 | 6 kB | 6.8 kB/s | ETA: 00:00:00 | 100%
To use the Run option, you must know the location of your pscp.exe file. In our case, it is in
the C:\Program Files (x86)\PuTTY folder.
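The copy command might resemble the following sketch; the saved PuTTY session name, the cluster IP address, and the local target directory are assumptions:
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 superuser@10.18.228.81:/dumps/elogs/ITSO_SVC2_errlog_110711_1 c:\temp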
By scrolling through the list or searching for the term “unfixed”, you can find more information
about the problem. You might see more entries in the error log that have the status of unfixed.
After fixing the problem, you can mark the event as fixed in the log by issuing the cherrstate
command against its sequence number, as shown in Example 9-216.
If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by
entering the same command and appending the -unfix flag to the end, as shown in
Example 9-217.
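The two forms might resemble the following sketch; the sequence number is illustrative:
IBM_2145:ITSO_SVC2:superuser>cherrstate -sequencenumber 103
IBM_2145:ITSO_SVC2:superuser>cherrstate -sequencenumber 103 -unfix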
This command sends all events and warnings to the SVC community on the SNMP manager
with the IP address 9.43.86.160.
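A command of that form might resemble the following sketch; the community name and the notification switches are assumptions:
IBM_2145:ITSO_SVC2:superuser>mksnmpserver -ip 9.43.86.160 -community SVC -error on -warning on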
The syslog protocol is a client/server standard for forwarding log messages from a sender to
a receiver on an IP network. You can use syslog to integrate log messages from various
types of systems into a central repository. You can configure the SVC to send information to a
maximum of six syslog servers.
You use the mksyslogserver command to configure the SVC by using the CLI, as shown in
Example 9-219.
The use of this command with the -h parameter gives you information about all of the
available options. In our example, we configure the SVC to use only the default values for our
syslog server.
When we configure our syslog server, we can display the current syslog server configurations
in our system, as shown in Example 9-220.
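The configuration and verification might resemble the following sketch; the IP address is illustrative and all other settings are left at their defaults:
IBM_2145:ITSO_SVC2:superuser>mksyslogserver -ip 9.43.86.160
IBM_2145:ITSO_SVC2:superuser>lssyslogserver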
9.16.5 Configuring error notification by using an email server
The SVC can use an email server to send event notification and inventory emails to email
users. It can transmit any combination of error, warning, and informational notification types.
The SVC supports up to six email servers to provide redundant access to the external email
network. The SVC uses the email servers in sequence until the email is successfully sent
from the SVC.
Important: Before the SVC can start sending emails, we must run the startemail
command, which enables this service.
The attempt is successful when the SVC receives a positive acknowledgment from an email
server that the email was received by the server.
We can configure an email user that receives email notifications from the SVC system. We
can define up to 12 users to receive emails from our SVC.
By using the lsemailuser command, we can verify which user is registered and what type of
information is sent to that user, as shown in Example 9-222.
We can also create a user for a SAN superuser, as shown in Example 9-223.
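The commands behind these steps might resemble the following sketch; the email address is illustrative:
IBM_2145:ITSO_SVC2:superuser>mkemailuser -address admin@itso.ibm.com -error on -warning on -info on -inventory on
IBM_2145:ITSO_SVC2:superuser>lsemailuser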
Node event codes now have the following classifications:
Critical events: Critical events put the node into the service state and prevent the node
from joining the system. The critical events are numbered 500 - 699.
Deleting a node: Deleting a node from a system causes the node to enter the service
state, as well.
Non-critical events: Non-critical events are partial hardware faults, for example, one
power-supply unit (PSU) failed in the 2145-CF8. The non-critical events are numbered
800 - 899.
To display the event log, use the lseventlog command, as shown in Example 9-224.
IBM_2145:ITSO_SVC2:superuser>lseventlog 103
sequence_number 103
first_timestamp 111003111036
first_timestamp_epoch 1317665436
last_timestamp 111003111036
last_timestamp_epoch 1317665436
object_type cluster
object_id
object_name ITSO_SVC2
copy_id
reporting_node_id 1
reporting_node_name SVC2N1
root_sequence_number
event_count 1
status message
fixed no
auto_fixed no
notification_type informational
event_id 981004
event_id_text FC discovery occurred, no configuration changes were detected
error_code
error_code_text
sense1 01 01 00 00 7E 0B 00 00 04 02 00 00 01 00 01 00
sense2 00 00 00 00 10 00 00 00 08 00 08 00 00 00 00 00
sense3 00 00 00 00 00 00 00 00 F2 FF 01 00 00 00 00 00
sense4 0E 00 00 00 FC FF FF FF 03 00 00 00 07 00 00 00
sense5 00 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00
sense6 00 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
By using this command, you can view the most recently generated events (specify the -count
parameter to define how many events to display). Use the method that is described in 9.16.2,
“Running the maintenance procedures” on page 637 to upload and analyze the event log in
more detail.
To clear the event log, issue the clearerrlog command, as shown in Example 9-225.
The use of the -force flag stops any confirmation requests from appearing. When you run
this command, the command clears all of the entries from the event log. This process
proceeds even if unfixed errors are in the log. This process also clears any status events in
the log.
Important: This command is a destructive command for the event log. Use this command
only when you rebuild the system or after you fix a major problem that caused many
entries in the event log that you do not want to fix manually.
Before you change the licensing, you can display your current licenses by issuing the
lslicense command, as shown in Example 9-226.
The current license settings for the system are displayed in the viewing license settings log
window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror,
Global Mirror, or Virtualization features. The license settings log window also shows the
storage capacity that is licensed for virtualization. Typically, the license settings log contains
entries because feature options must be set as part of the web-based system creation
process.
For example, consider that you purchased another 5 TB of licensing for the Metro Mirror and
Global Mirror feature in addition to your existing 20 TB license. Example 9-227 shows the
command that you enter.
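The change might resemble the following sketch, which raises the remote copy license to the new 25 TB total:
IBM_2145:ITSO_SVC2:superuser>chlicense -remote 25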
To verify that the changes that you made are reflected in your SVC configuration, you can run
the lslicense command, as shown in Example 9-228 on page 644.
Example 9-228 lslicense command: Verifying changes
IBM_2145:ITSO_SVC2:superuser>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 25
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off
If you do not supply a file name prefix, the system uses the default errlog_ file name prefix.
The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the
node front panel name. If the command is used with the -prefix option, the value that is
entered for the -prefix is used instead of errlog.
The lsdumps -prefix /dumps/elogs command lists all of the dumps in the /dumps/elogs
directory, as shown in Example 9-229.
The lsdumps -prefix /dumps/feature command lists all of the dumps in the /dumps/feature
directory, as shown in Example 9-230 on page 645.
Example 9-230 lsdumps -prefix /dumps/feature command
IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/feature
id filename
0 feature.txt
The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name
and prefix is the value that is entered by the user for the -filename parameter in the
settrace command.
The command to list all of the dumps in the /dumps/iotrace directory is lsdumps -prefix
/dumps/iotrace, as shown in Example 9-231.
The file names that are used for storing I/O statistics dumps are
m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether
the statistics are for MDisks or volumes. In these file names, NNNNNN is the node front panel
name.
The command to list all of the dumps that are in the /dumps/iostats directory is lsdumps
-prefix /dumps/iostats, as shown in Example 9-232.
Software dump
The lsdumps command lists the contents of the /dumps directory. The general debug
information, software, application dumps, and live dumps are copied into this directory.
Example 9-233 shows the command.
However, files can be copied only from the current configuration node (by using PuTTY
Secure Copy). Therefore, you must run the cpdumps command to copy the files from a
non-configuration node to the current configuration node. You can then copy them to the
management workstation by using PuTTY Secure Copy.
For example, suppose that you discover a dump file and want to copy it to your management
workstation for further analysis. In this case, you must first copy the file to your current
configuration node.
To copy dumps from other nodes to the configuration node, use the cpdumps command.
In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.
Wildcards: The following rules apply to the use of wildcards with the SVC CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation
marks (" "), as shown in the following example:
cleardumps -prefix "/dumps/elogs/*.txt"
Example 9-234 cpdumps command
IBM_2145:ITSO_SVC2:superuser>cpdumps -prefix /dumps/configs n4
After you copy the configuration dump file from node n4 to your configuration node, you can
use PuTTY Secure Copy to copy the file to your management workstation for further analysis.
To clear the dumps, you can run the cleardumps command. Again, you can append the node
name if you want to clear dumps off a node other than the current configuration node (the
default for the cleardumps command).
The commands in Example 9-235 clear all logs or dumps from the SVC node SVC1N2.
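The commands might resemble the following sketch, which clears the general dumps directory and the event logs on node SVC1N2:
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/elogs SVC1N2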
The backup command extracts configuration data from the system and saves it to the
svc.config.backup.xml file in the /tmp directory. This process also produces an
svc.config.backup.sh file. You can study this file to see what other commands were issued
to extract information.
An svc.config.backup.log log is also produced. You can study this log for the details of what
was done and when it was done. This log also includes information about the other
commands that were issued.
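A minimal sketch of taking the backup from the CLI:
IBM_2145:ITSO_SVC2:superuser>svcconfig backup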
We further advise that you change all of the objects that have default names to non-default
names. Otherwise, a warning is produced for objects with default names.
Also, an object with a default name is restored with its original name with an _r appended.
The underscore (_) prefix is reserved for backup and restore command usage. Do not use this
prefix in any object names.
Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool. Instead, this tool supplements a
traditional data backup and restore tool with a way to back up and restore the client’s
configuration.
To provide a complete backup and disaster recovery solution, you must back up user
(non-configuration) data and configuration (non-user) data. After the restoration of the SVC
configuration, you must fully restore user (non-configuration) data to the system’s disks.
9.17.1 Prerequisites
You must meet the following prerequisites:
All nodes are online.
No object name can begin with an underscore.
All objects have non-default names, that is, names that are not assigned by the SVC.
Although we advise that objects have non-default names at the time that the backup is taken,
this prerequisite is not mandatory. Objects with default names are renamed when they are
restored.
As you can see in Example 9-236, we received a CMMVC6130W Cluster ITSO_SVC4 with
inter-cluster partnership fully_configured will not be restored message. This
message indicates that individual systems in a multisystem environment must be backed up
individually.
If recovery is required, run the recovery commands only on the system where the recovery is
required.
3. If a sufficiently severe failure occurs, the system might be lost. Both the configuration data
(for example, the system definitions of hosts, I/O Groups, managed disk groups (MDGs),
and MDisks) and the application data on the virtualized disks are lost.
In this scenario, it is assumed that the application data can be restored from normal client
backup procedures. However, before you can perform this restoration, you must reinstate
the system as it was configured at the time of the failure. Therefore, you restore the same
MDGs, I/O Groups, host definitions, and volumes that existed before the failure. Then, you
can copy the application data back onto these volumes and resume operations.
4. Recover the hosts, SVCs, disk controller systems, disk hardware, and SAN fabric. The
hardware and SAN fabric must physically be the same as the hardware and SAN fabric
that were used before the failure.
5. Reinitialize the clustered system with the configuration node; the other nodes are
recovered when the configuration is restored.
6. Restore your clustered system configuration by using the backup configuration file that
was generated before the failure.
7. Restore the data on your volumes by using your preferred restoration solution or with help
from IBM Support.
8. Resume normal operations.
Important: Always consult IBM Support before you restore the SVC clustered system
configuration from the backup. IBM Support can assist you in analyzing the root cause of
why the system configuration was lost.
After the svcconfig restore -execute command is started, consider any prior user data on
the volumes destroyed. The user data must be recovered through your usual application data
backup and restore process.
For more information, see the V6.4.0 Command-Line Interface User’s Guide for IBM System
Storage SAN Volume Controller and Storwize V7000, GC27-2287.
For more information about the SVC configuration backup and restore functions, see V6.4.0
Software Installation and Configuration Guide for IBM System Storage SAN Volume
Controller, GC27-2286.
When the clear command is used, you erase the files in the /tmp directory. This command
does not clear the running configuration or prevent the system from working. However, it
clears all of the configuration backup files that are stored in the /tmp directory, as shown in
Example 9-238 on page 650.
Example 9-238 svcconfig clear command
IBM_2145:ITSO_SVC2:superuser>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully
chquorum -mdisk 9 2
IBM_2145:ITSO_SVC2:superuser>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 9 mdisk9 3 ITSO-DS5000 no mdisk no
As you can see in Example 9-240 on page 650, the quorum index 2 was moved from MDisk3
on the ITSO-DS3500 controller to MDisk9 on the ITSO-DS5000 controller.
When the command syntax is shown, you see certain parameters in square brackets, for
example, [parameter], which indicate that the parameter is optional in most (if not all)
instances. Any information that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
sainfo -? shows a complete list of information commands.
satask -? shows a complete list of task commands.
sainfo commandname -? shows the syntax of information commands.
satask commandname -? shows the syntax of task commands.
Example 9-241 shows the two sets of commands with Service Assistant.
startservice
stopnode
stopservice
t3recovery
Important: You must use the sainfo and satask command sets under the direction of IBM
Support. The incorrect use of these commands can lead to unexpected results.
For more information about how to troubleshoot and collect data from the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Use the lsfabric command regularly to obtain a complete picture of the devices that are
connected and visible from the SVC cluster through the SAN. The lsfabric command
generates a report that displays the FC connectivity between nodes, controllers, and hosts.
20690080E51B09E8 041900 1 SVC2N1 50050768013027E2 2 040800
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 1 SVC2N1 50050768012027E2 4 040900
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801305034 2 040A00
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801205034 4 040B00
inactive ITSO-DS3500 controller
50050768013037DC 041400 1 SVC2N1 50050768013027E2 2 040800
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 1 SVC2N1 50050768012027E2 4 040900
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801305034 2 040A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801205034 4 040B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC2N1 50050768014027E2 1 030800
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC2N1 50050768011027E2 3 030900
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N2 ITSO_SVC3 node
.....
Above and below rows have been removed for brevity
.....
5005076801201D22 021300 1 SVC2N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 1 SVC2N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801305034 2 040A00
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801205034 4 040B00
active SVC2N2 ITSO_SVC2 node
50050768011037DC 011513 1 SVC2N1 50050768014027E2 1 030800
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 1 SVC2N1 50050768011027E2 3 030900
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801105034 3 030B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801301D22 021200 1 SVC2N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801301D22 021200 1 SVC2N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
....
Above and below rows have been removed for brevity
....
For more information about the lsfabric command, see the V6.4.0 Command-Line Interface
User’s Guide for IBM System Storage SAN Volume Controller and Storwize V7000,
GC27-2287.
9.22 T3 recovery process
A procedure, which is known as T3 recovery, was tested and used in select cases in which a
system was completely destroyed. One example is simultaneously pulling power cords from
all nodes to their uninterruptible power supply units. In this case, all nodes boot up to
node error 578 when the power is restored.
In certain circumstances, this procedure can recover most user data. However, it is not to be
used by the client or an IBM service support representative (SSR) without the direct
involvement from IBM Level 3 technical support. This procedure is not published, but we refer
to it here only to indicate that the loss of a system can be recoverable without total data loss.
However, this procedure requires a restoration of application data from the backup. T3
recovery is an extremely sensitive procedure that is to be used as a last resort only, and it
cannot recover any data that was destaged from cache at the time of the total system failure.
Chapter 10. SAN Volume Controller operations using the GUI
The information is divided into normal operations and advanced operations. We explain the
basic configuration procedures that are required to get your SVC environment running as
quickly as possible by using its GUI.
Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the one that takes effect.
Important: Data entries that are made through the GUI are case-sensitive.
In later sections of this chapter, we expect users to be able to navigate to this panel without
our explaining the procedure each time.
Dynamic menu
From any page in the SVC GUI, you can always access the dynamic menu. The SVC GUI
dynamic menu is on the left side of the SVC GUI window. To browse by using this menu,
hover the mouse pointer over the various icons and choose a page that you want to display,
as shown in Figure 10-2 on page 657.
Figure 10-2 The dynamic menu in the left column of the IBM SAN Volume Controller GUI
The IBM SAN Volume Controller dynamic menu consists of multiple panels. These panels
group common configuration and administration objects and present individual administrative
objects to the IBM SAN Volume Controller GUI users, as shown in Figure 10-3.
Suggested tasks
After a successful login, the SVC opens a pop-up window with suggested tasks, notifying
administrators that several key SVC functions are not yet configured. You cannot miss or
overlook this window. However, you can close the pop-up window and perform tasks at any
time.
Figure 10-4 shows the suggested tasks in the System panel.
In this case, the SVC GUI warns you that no volumes are mapped to hosts or that no hosts are defined yet. You can perform the task directly from this window, or cancel it and run the procedure later at any convenient time. Other suggested tasks that typically appear after the initial configuration of the SVC include creating a volume and configuring a storage pool.
The dynamic IBM SAN Volume Controller GUI menu contains the following panels
(Figure 10-3 on page 657):
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
If non-critical issues exist for your system nodes, external storage controllers, or remote
partnerships, a new status area opens next to the Health Status widget, as shown in
Figure 10-7.
You can fix the error by clicking Status Alerts, which directs you to the fix procedures in the Events panel.
If a critical system connectivity error exists, the Health Status bar turns red and alerts the
system administrator for immediate action, as shown in Figure 10-8.
The following information is displayed in this storage allocation indicator window. To view all of
the information, you must use the up and down arrow keys:
Allocated capacity
Virtual capacity
Compression ratio
Important: Since version 7.4, the capacity units use the binary prefixes that are defined by the International Electrotechnical Commission (IEC). The prefixes represent multiplication by powers of 1024, with the symbols GiB (gibibyte), TiB (tebibyte), and PiB (pebibyte).
Click within the square (as shown in Figure 10-5 on page 658) to display detailed information about running and recently completed tasks, as shown in Figure 10-11.
Help
Another useful interface feature is integrated help. You can access help for certain fields and
objects by hovering the mouse cursor over the question mark icon next to the field or object
(Figure 10-12 on page 661). Panel-specific help is available by clicking Need Help or by using
the Help link in the upper-right corner of the GUI.
Figure 10-12 Access to panel-specific help
Overview window
In SVC Version 7.4, the welcome window of the GUI changed from the well-known former
Overview panel to the new System panel, as shown in Figure 10-1 on page 656. Clicking
Overview (Figure 10-13) in the upper-right corner of the System panel opens a modified
Overview panel with options that are similar to previous versions of the software.
The remainder of this chapter helps you to understand the structure of the panel and how to navigate to the various system components so that you can manage them more efficiently and quickly.
Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.
Complete the following steps to use search filtering:
1. Click Filter on the upper-left side of the window, as shown in Figure 10-14, to open the
search box.
2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter the table based on the column contents. In our example, the volume list displays only the names that include DS somewhere in the name. DS is highlighted in amber, as shown in Figure 10-15. The search option is not case-sensitive.
4. Remove this filtered view by clicking the reset filter icon, as shown in Figure 10-16.
Filtering: This filtering option is available in most menu options of the GUI.
Table information
In the table view, you can add or remove the information in the tables on most pages.
For example, on the Volumes page, complete the following steps to add a column to our table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns appears, as shown in Figure 10-17 on
page 663.
2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 10-18.
3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 10-19 on page 664.
Figure 10-19 Restore default table view
Sorting: By clicking a column, you can sort a table that is based on that column in
ascending or descending order.
10.1.3 Help
To access online help, move the mouse pointer over the question mark (?) icon in the
upper-right corner of any panel and select the context-based help topic, as shown in
Figure 10-21 on page 665. Depending on the panel you are working with, the help displays its
context item.
Figure 10-21 Help link
By clicking Information Center, you are directed to the public IBM Knowledge Center, which
provides all of the information about the SVC systems, as shown in Figure 10-22.
As of V7.4, the option that was formerly called System Details is integrated into the device
overview on the general System panel, which is available after logging in or when clicking the
option System from the Monitoring menu. For more information, see “Overview window” on
page 661.
Figure 10-24 System overview that shows capacity
When you click a specific component of a node, a pop-up window indicates the details of the
disk drives in the unit. By right-clicking and selecting Properties, you see detailed technical
parameters, such as capacity, interface, rotation speed, and the drive status (online or offline).
Figure 10-25 Component details
In an environment with multiple SVC systems, you can easily direct the onsite personnel or
technician to the correct device by enabling the identification LED on the front panel. Click
Identify in the pop-up window that is shown in Figure 10-24. Then, wait for confirmation from
the technician that the device in the data center was correctly identified.
After the confirmation, click Turn LED Off (Figure 10-26).
Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).
Each system that is shown in the Dynamic system view in the middle of a System panel can
be rotated by 180° to see its rear side. Click the rotation arrow in the lower-right corner of the
device, as illustrated in Figure 10-27.
Figure 10-28 System details
The output is shown in Figure 10-29. By using this menu, you can also power off the machine
(without an option for remote start), remove the node or enclosure from the system, or list all
of the volumes that are associated with the system, for example.
In addition, from the System panel, you can see an overview of important status information
and the parameters of the Fibre Channel (FC) ports (Figure 10-30).
By choosing Fibre Channel Ports, you can see the list and status of the available FC ports
with their worldwide port names (WWPNs), as shown in Figure 10-31.
10.2.3 Events
The Events option, which you select from the Monitoring menu, tracks all informational, warning, and error messages that occur in the system. You can apply various filters to sort them, or export the displayed information to an external comma-separated values (CSV) file. Figure 10-32 on page 671 provides an example of records in the SVC Event log.
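If you prefer the CLI, a similar export can be produced with the lseventlog command. The following sketch redirects comma-separated output to a file on a management workstation; the cluster address and file name are examples only:
ssh admin@svc_cluster_ip "svcinfo lseventlog -delim ," > svc_eventlog.csv
You can then open the file in a spreadsheet and filter it in the same way as the GUI export.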
Figure 10-32 Event log list
10.2.4 Performance
The Performance panel reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps and IOPS, or drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 10-33.
The performance statistics in the GUI show, by default, the latest five minutes of data. To see
details of each sample, click the graph and select the time stamp, as shown in Figure 10-34
on page 672.
Figure 10-34 Sample details
The charts that are shown in Figure 10-34 represent five minutes of the data stream. For
in-depth storage monitoring and performance statistics with historical data about your SVC
system, use the IBM Tivoli Storage Productivity Center for Disk or IBM Virtual Storage Center.
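Similar point-in-time statistics are also available on the CLI through the lssystemstats and lsnodestats commands, which can be useful for scripted checks after a node failover. This is a minimal sketch; the node name is an example only:
svcinfo lssystemstats
svcinfo lsnodestats node1
Both commands report the most recent samples of values, such as CPU utilization and FC throughput, for the system and for an individual node.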
For more information about a specific controller and MDisks, click the plus sign (+) that is to
the left of the controller icon and name.
Figure 10-36 Renaming a storage system
2. Enter the new name that you want to assign to the controller and then click Rename, as
shown in Figure 10-37.
Controller name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.
3. A task is started to change the name of this storage system. When it completes, you can
close this window. The new name of your controller is displayed on the External Storage
panel.
In this stretched mode, it is mandatory to assign a site to each component in the SVC
environment (SVC nodes and storage controllers). In a normal topology, the site assignment
is not necessary, and we do not recommend that you configure it. It might affect certain
procedures, such as volume migration or FlashCopy operations.
10.3.4 Discovering MDisks from the external panel
You can discover MDisks from the External Storage panel. Complete the following steps to
discover new MDisks:
1. Ensure that no existing controllers are highlighted. Click Actions.
2. Click Detect MDisks to discover MDisks from this controller, as shown in Figure 10-38.
3. When the task completes, click Close to see the newly detected MDisks.
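A comparable CLI sequence is shown in the following sketch; it triggers a fabric rediscovery and then lists any MDisks that are not yet assigned to a pool:
svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged
Run the lsmdisk command again after a short wait if the newly presented LUNs do not appear immediately.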
You can add new columns to the table, as described in “Table information” on page 662.
To retrieve more information about a specific storage pool, select any storage pool in the left
column. The upper-right corner of the panel, which is shown in Figure 10-40, contains the
following information about this pool:
Status
Number of MDisks
Number of volume copies
Whether Easy Tier is active on this pool
Site assignment
Volume allocation
Capacity
Change the view by selecting MDisks by Pools. Select the pool with which you want to work
and click the plus sign (+), which expands the information. This panel displays the MDisks
that are present in this storage pool, as shown in Figure 10-41.
2. In the first window of the wizard, complete the following elements, as shown in Figure 10-43:
a. Specify a name for the storage pool. If you do not provide a name, the SVC
automatically generates the name mdiskgrpx, where x is the ID sequence number that
is assigned by the SVC internally.
Storage pool name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The name can be 1 - 63 characters. The name is
case-sensitive. The name cannot start with a number or the pattern “MDiskgrp”
because this prefix is reserved for SVC internal assignment only.
b. Optional: Change the icon that is associated with this storage pool, as shown in
Figure 10-43.
c. In addition, you can specify the following information and then click Next:
• Extent Size under the Advanced Settings section. The default is 1 GiB.
• Warning threshold to log a message to the event log when the capacity is
exceeded. The default is 80%.
3. In the next window (as shown in Figure 10-44), complete the following steps to specify the
MDisks that you want to associate with the new storage pool:
a. Select the MDisks that you want to add to this storage pool.
Tip: To add multiple MDisks, press and hold the Ctrl key and click selected items.
Figure 10-44 Create Pool window (step 2 of 2)
4. Close the task completion window. In the Storage Pools panel (as shown in Figure 10-45),
the new storage pool is displayed.
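A similar pool can be created from the CLI. In the following sketch, the pool name and MDisk names are examples only; -ext specifies the extent size in MiB and -warning sets the capacity warning threshold:
svctask mkmdiskgrp -name Pool_DS01 -ext 1024 -warning 80%
svctask addmdisk -mdisk mdisk5:mdisk6 Pool_DS01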
2. Enter the new name that you want to assign to the storage pool and click Rename, as
shown in Figure 10-47.
Figure 10-47 Changing the name for a storage pool
Storage pool name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.
2. In the Delete Pool window, click Delete to confirm that you want to delete the storage pool,
including its MDisks, as shown in Figure 10-49. If configured volumes exist within the
storage pool that you are deleting, you must unmap and delete the volumes first.
Figure 10-49 Deleting a pool
New in V7.4: The SVC 7.4 does not allow the user to directly delete pools that contain any
active volumes.
To retrieve more information about a specific MDisk, complete the following steps:
1. From the expanded view of a storage pool in the MDisks panel, select an MDisk.
2. Click Properties, as shown in Figure 10-50 on page 680.
Figure 10-50 MDisks menu
3. For the selected MDisk, an overview is displayed that shows its parameters and
dependent volumes, as shown in Figure 10-51.
4. Click the Dependent Volumes tab to display information about the volumes that are on
this MDisk, as shown in Figure 10-52. For more information about the volume panel, see
10.8, “Working with volumes” on page 699.
5. Click Close to return to the previous window.
3. In the Rename MDisk window (Figure 10-54), enter the new name that you want to assign
to the MDisk and click Rename.
MDisk name: The name can consist of the letters A - Z and a - z, the numbers 0 - 9,
the dash (-), and the underscore (_) character. The name can be 1 - 63 characters.
10.5.3 Discovering MDisks
Complete the following steps to discover newly assigned MDisks:
1. In the SVC dynamic menu, move the pointer over Pools and click MDisks by Pools.
2. Ensure that no existing storage pools are selected. Click Actions.
3. Click Detect MDisks, as shown in Figure 10-55.
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are correctly assigned to the SVC. Also, check that correct
zoning is in place. For example, ensure that the SVC can see the disk subsystem.
Site awareness: Do not assign sites to the SVC nodes and external storage controllers
in a standard, normal topology. Site awareness is intended primarily for SVC stretched
clusters. If any MDisks or controllers appear offline after detection, remove the site
assignment from the SVC node or controller and detect the MDisks again.
10.5.4 Assigning MDisks to a storage pool
If empty storage pools exist or you want to assign more MDisks to your pools that already
have existing MDisks, use the following steps:
1. From the MDisks by Pools panel, select the unmanaged MDisk that you want to add to a
storage pool.
2. Click Actions → Assign to Pool, as shown in Figure 10-57.
Alternative: You can also access the Assign to Pool action by right-clicking an MDisk.
3. From the Add MDisk to Pool window, select to which pool you want to add this MDisk, and
then, click Add to Pool, as shown in Figure 10-58.
Figure 10-59 Actions: Unassign from Pool
Alternative: You can also access the Unassign from Pool action by right-clicking the MDisk.
3. From the Remove MDisk from Pool window (Figure 10-59), you must validate the number of MDisks that you want to remove from this pool. This verification was added to secure the process of deleting data.
If volumes use the MDisks that you are removing from the storage pool, you must select Remove the MDisk from the pool even if it has data on it to confirm the removal. The system then migrates the data to other MDisks in the pool.
4. Click Delete, as shown in Figure 10-60.
When the migration is complete, the MDisk status changes to Unmanaged. Ensure that
the MDisk remains accessible to the system until its status becomes Unmanaged. This
process might take time. If you disconnect the MDisk before its status becomes
Unmanaged, all of the volumes in the pool go offline until the MDisk is reconnected.
An error message is displayed (as shown in Figure 10-61) if insufficient space exists to
migrate the volume data to other extents on other MDisks in that storage pool.
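On the CLI, a similar removal can be performed with the rmmdisk command. The pool and MDisk names in this sketch are examples; the -force parameter corresponds to the GUI confirmation that data on the MDisk is migrated to the remaining MDisks in the pool:
svctask rmmdisk -mdisk mdisk5 -force Pool_DS01
As with the GUI, do not disconnect the MDisk until its mode shows unmanaged in the lsmdisk output.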
10.6 Migration
For more information about data migration, see Chapter 6, “Data migration” on page 241.
Host configuration: For more information about connecting hosts to the SVC in a SAN
environment, see Chapter 5, “Host configuration” on page 169.
A host system is a computer that is connected to the SVC through an FC interface, Fibre
Channel over Ethernet (FCoE), or an Internet Protocol (IP) network.
A host object is a logical object in the SVC that represents a list of worldwide port names
(WWPNs) and a list of Internet Small Computer System Interface (iSCSI) names that identify
the interfaces that the host system uses to communicate with the SVC. iSCSI names can be
iSCSI-qualified names (IQN) or extended unique identifiers (EUI).
A typical configuration has one host object for each host system that is attached to the SVC. If
a cluster of hosts accesses the same storage, you can add host bus adapter (HBA) ports from
several hosts to one host object to simplify a configuration. A host object can have both
WWPNs and iSCSI names.
The following methods can be used to visualize and manage your SVC host objects from the
SVC GUI Hosts menu selection:
Use the Hosts panel, as shown in Figure 10-62.
Use the Volumes by Hosts panel, as shown in Figure 10-65 on page 687.
Figure 10-65 Volumes by Hosts panel
Important: Several actions on hosts are specific to the Ports by Host panel or the Host Mapping panel, but all of these actions and others are accessible from the Hosts panel. For this reason, all actions on hosts in this chapter are run from the Hosts panel.
You can add information (new columns) to the table in the Hosts panel, as shown in
Figure 10-17 on page 663. For more information, see “Table information” on page 662.
To retrieve more information about a specific host, complete the following steps:
1. In the table, select a host.
2. Click Actions → Properties, as shown in Figure 10-66.
Alternative: You can also access the Properties action by right-clicking a host.
3. You are presented with information for a host in the Overview window, as shown in
Figure 10-67 on page 688.
Figure 10-67 Host details: Overview
Show Details option: To obtain more information about the hosts, select Show Details
(Figure 10-67).
4. On the Mapped Volumes tab (Figure 10-68), you can see the volumes that are mapped to
this host.
5. The Port Definitions tab, as shown in Figure 10-69 on page 689, displays attachment
information, such as the WWPNs that are defined for this host or the iSCSI IQN that is
defined for this host.
Figure 10-69 Host details: Port Definitions tab
When you finish viewing the details, click Close to return to the previous window.
Note: The FCoE hosts are listed under the FC Hosts Add menu in the SVC GUI. Click Fibre Channel Host to access the FCoE host options. (See Figure 10-71 on page 690.)
3. Select Fibre Channel Host from the two available connection types, as shown in
Figure 10-71 on page 690.
Figure 10-71 Create a Fibre Channel host
4. In the Add Host window (Figure 10-72 on page 691), enter a name for your host in the
Host Name field.
Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
provide a name, use letters A - Z and a - z, numbers 0 - 9, or the underscore (_)
character. The host name can be 1 - 63 characters.
5. In the Fibre Channel Ports section, use the drop-down list box to select the WWPNs that
correspond to your HBA or HBAs and then click Add Port to List. Repeat this step to add
more ports.
Deleting an FC port: If you added the wrong FC port, you can delete it from the list by
clicking the red X.
If your WWPNs do not display, click Rescan to rediscover the available WWPNs that are
new since the last scan.
WWPN still not displayed: In certain cases, your WWPNs still might not display, even
though you are sure that your adapter is functioning (for example, you see the WWPN
in the switch name server) and your SAN zones are set up correctly. To correct this
situation, enter the WWPN of your HBA or HBAs into the drop-down list box and click
Add Port to List. It is displayed as unverified.
6. If you need to modify the I/O Group or Host Type, you must select Advanced in the
Advanced Settings section to access these Advanced Settings, as shown in Figure 10-72
on page 691. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these hosts, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that use MPxIO.
Figure 10-72 Creating a Fibre Channel host
7. Click Add Host, as shown in Figure 10-72. After you return to the Hosts panel
(Figure 10-73 on page 692), you can see the newly added FC host.
Figure 10-73 New Fibre Channel host
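A similar FC host object can be created from the CLI with the mkhost command. The host name and WWPNs in the following sketch are examples only:
svctask mkhost -name W2K12_FC -fcwwpn 2100000E1E123456:2100000E1E123457 -iogrp 0 -type generic
The -type parameter accepts the same host types as the GUI, such as generic, hpux, or tpgs.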
iSCSI-attached hosts
To create a host that uses the iSCSI connection type, complete the following steps:
1. To go to the Hosts panel from the SVC System panel on Figure 10-1 on page 656, move
the mouse pointer over the Hosts selection and click Hosts.
2. Click Add Host, as shown in Figure 10-70 on page 689, and select iSCSI Host.
3. In the Add Host window (as shown in Figure 10-74), enter a name for your host in the Host
Name field.
Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The host name can be 1 - 63 characters.
4. In the iSCSI ports section, enter the iSCSI initiator or IQN as an iSCSI port and then click
Add Port to List. This IQN is obtained from the server and generally has the same
purpose as the WWPN. Repeat this step to add more ports.
Deleting an iSCSI port: If you add the wrong iSCSI port, you can delete it from the list
by clicking the red X.
If needed, select Use CHAP authentication (all ports), as shown in Figure 10-74, and
enter the Challenge Handshake Authentication Protocol (CHAP) secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole system
under the system’s properties or for each host definition. The CHAP must be identical on
the server and the system or host definition. You can create an iSCSI host definition
without the use of a CHAP.
5. If you need to modify the I/O Group or Host Type, you must select the Advanced option in
the Advanced Settings section to access these settings, as shown in Figure 10-72 on
page 691. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these types, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that are using
MPxIO.
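A matching CLI sketch for an iSCSI host follows; the host name, IQN, and CHAP secret are examples only:
svctask mkhost -name W2K12_iSCSI -iscsiname iqn.1991-05.com.microsoft:itso-host1 -iogrp 0
svctask chhost -chapsecret mysecret123 W2K12_iSCSI
The chhost command sets the per-host CHAP secret; the system-wide CHAP secret is set with the chsystem command instead.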
10.7.3 Renaming a host
Complete the following steps to rename a host:
1. In the table, select the host that you want to rename.
2. Click Actions → Rename, as shown in Figure 10-75.
Alternatives: Two other methods can be used to rename a host. You can right-click a
host and select Rename, or you can use the method that is described in Figure 10-76
on page 694.
3. In the Rename Host window, enter the new name that you want to assign and click
Rename, as shown in Figure 10-76.
Figure 10-77 Remove action
2. The Remove Host window opens, as shown in Figure 10-78. In the Verify the number of
hosts that you are deleting field, enter the number of hosts that you want to remove. This
verification was added to help you avoid deleting the wrong hosts inadvertently.
If volumes are still associated with the host and you are sure that you want to delete the host even though these volumes will no longer be accessible, select Remove the host even if volumes are mapped to them. These volumes will no longer be accessible to the hosts.
3. Click Delete to complete the process, as shown in Figure 10-78.
Tip: You can also right-click a host and select Modify Mappings.
3. In the Modify Host Mappings window (Figure 10-80), select the volume or volumes that
you want to map to this host and move each volume to the table on the right by clicking the
two greater than symbols (>>). If you must remove the volumes, select the volume and
click the two less than symbols (<<).
4. In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Click Edit SCSI ID (Figure 10-81 on
page 697).
Figure 10-81 Changing the SCSI ID
When you attempt to map a volume that is already mapped to another host, a warning pop-up window appears, prompting for confirmation (Figure 10-82). Mapping a volume to multiple hosts is intentional in clustered or fault-tolerant systems, for example.
Changing a SCSI ID: You can change the SCSI ID only on new mappings. To edit the SCSI ID of an existing mapping, you must unmap the volume and re-create the mapping.
5. In the Edit SCSI ID window, change the SCSI ID and then click OK, as shown in Figure 10-83 on page 698.
Figure 10-83 Modify host mappings window: Edit SCSI ID
6. After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships.
Tip: You can also right-click a host and select Modify Mappings.
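The comparable CLI command is mkvdiskhostmap, shown in the following sketch with an example host name, SCSI ID, and volume name:
svctask mkvdiskhostmap -host W2K12_FC -scsi 0 Volume_01
If you omit the -scsi parameter, the system assigns the next available SCSI ID for that host.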
2. From the Unmap from Host window (Figure 10-85), enter the number of mappings that you want to remove in the “Verify the number of mappings that this operation affects” field. This verification helps you to avoid removing the wrong mappings unintentionally.
3. Click Unmap to remove the host mapping or mappings. You are returned to the Hosts
panel.
You can visualize and manage your volumes by using the following methods:
Use the Volumes panel, as shown in Figure 10-86.
Or use the Volumes by Pool panel, as shown in Figure 10-87.
Important: Several actions on volumes are specific to the Volumes by Pool panel or to the Volumes by Host panel. However, all of these actions and others are accessible from the Volumes panel. We run all of the actions in the following sections from the Volumes panel.
You can add information (new columns) to the table in the Volumes panel, as shown in
Figure 10-17 on page 663. For more information, see “Table information” on page 662.
To retrieve more information about a specific volume, complete the following steps:
1. In the table, select a volume and click Actions → Properties, as shown in Figure 10-89.
Tip: You can also access the Properties action by right-clicking a volume name.
The Overview tab shows information about a volume, as shown in Figure 10-90.
The Host Maps tab (Figure 10-91 on page 702) displays the hosts that are mapped with
this volume.
Figure 10-91 Volume properties: Volume is mapped to this host
The Member MDisks tab (Figure 10-92) displays the MDisks that are used for this volume.
You can perform actions on the MDisks, such as removing them from a pool, adding them
to a tier, renaming them, showing their dependent volumes, or displaying their properties.
2. When you finish viewing the details, click Close to return to the Volumes panel.
3. Select one of the following presets, as shown in Figure 10-94 on page 703:
– Generic: Create volumes that use a fully allocated (thick) amount of capacity from the
selected storage pool.
– Thin Provision: Create volumes whose capacity is virtual (seen by the host), but that
use only the real capacity that is written by the host application. The virtual capacity of
a thin-provisioned volume often is larger than its real capacity.
– Mirror: Create volumes with two physical copies that provide data protection. Each
copy can belong to a separate storage pool to protect data from storage failures.
– Thin Mirror: Create volumes with two physical copies to protect data from failures while
using only the real capacity that is written by the host application.
– Compressed: Create volumes whose data is compressed while it is written to disk,
which saves more space.
Changing the preset: For our example, we chose the Generic preset. However, whichever preset you choose, you can reconsider your decision later by customizing the volume through the Advanced option.
4. After selecting a preset (in our example, Generic), you must select the storage pool on
which the data is striped, as shown in Figure 10-94.
Figure 10-94 Creating a volume: Select preset and the storage pool
5. After you select the storage pool, the window is updated automatically. You must enter the
following information, as shown in Figure 10-95 on page 704:
– Enter a volume quantity. You can create multiple volumes at the same time by using an
automatic sequential numbering suffix.
– Enter a name if you want to create a single volume, or a naming prefix if you want to
create multiple volumes.
Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63 characters.
– Enter the size of the volume that you want to create and select the capacity unit of
measurement (bytes, KiB, MiB, GiB, or TiB) from the list.
An updated summary automatically appears in the bottom of the window to show the
amount of space that is used and the amount of free space that remains in the pool.
Figure 10-95 Create Volume: Volume details
Naming: When you create more than one volume, the wizard does not prompt you for a name for each volume that is created. Instead, the name that you enter here becomes the prefix, and a number (starting at zero) is appended to this prefix as each volume is created. You can modify the starting suffix to any whole non-negative number that you prefer. Modifying the ending value increases or decreases the number of volumes that are created accordingly.
6. You can activate and customize advanced features, such as thin-provisioning or mirroring,
depending on the preset that you selected. To access these settings, click Advanced.
On the Characteristics tab (Figure 10-96 on page 705), you can set the following options:
– General: Format the new volume by selecting Format Before Use. (Formatting writes zeros to the volume's MDisk extents before the volume can be used.)
– Locality: Choose a caching I/O Group and then select a preferred node. You can leave
the default values for SVC auto-balance. After you select a caching I/O Group, you also
can add more I/O Groups as Accessible I/O Groups.
– OpenVMS only: Enter the user-defined identifier (UDID) for OpenVMS. You must
complete only this field for the OpenVMS system.
Figure 10-96 Create Volume: Advanced settings and Characteristics
On the Capacity Management tab (Figure 10-97), you can set the following options after
you activate thin provisioning by selecting Thin-Provisioned:
– Real Capacity: Enter the real size that you want to allocate. This size is either a percentage of the virtual capacity or a specific amount of disk space in GiB.
– Automatically Expand: Select to allow the real disk size to grow, as required.
– Warning Threshold: Enter a percentage of the virtual volume capacity for a threshold
warning. A warning message is generated when the used disk capacity on the
space-efficient copy first exceeds the specified threshold.
– Thin-Provisioned Grain Size: Select the grain size: 32 KiB, 64 KiB, 128 KiB, or
256 KiB. Smaller grain sizes save space. Larger grain sizes produce better
performance. Try to match the FlashCopy grain size if the volume is used for
FlashCopy.
Important: Compressed and uncompressed volumes must not be mixed within the
same pool.
Figure 10-98 Create Volume: Advanced settings
For more information about the Real-time Compression feature, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859, and
Implementing IBM Real-time Compression in SAN Volume Controller and IBM Storwize
V7000, TIPS1083.
On the Mirroring tab (Figure 10-99), you can set the Mirror Sync Rate option after you
activate mirroring by selecting Create Mirrored Copy. Enter the Mirror Sync Rate, which
is the I/O governing rate, by using a percentage that determines how quickly copies are
synchronized. A zero value disables synchronization.
Important: When you activate this feature from the Advanced menu, you must select a
secondary pool in the main window (Figure 10-95 on page 704). The primary pool is
used as the primary and preferred copy for read operations. The secondary pool is
used as the secondary copy.
7. After you set all of the advanced settings, click OK to return to the main menu
(Figure 10-95 on page 704).
8. You can choose to create only the volumes by clicking Create, or you can create and map
the volumes by selecting Create and Map to Host.
If you select to create only the volumes, you are returned to the Volumes panel. You see
that your volumes were created but not mapped, as shown in Figure 10-100. You can map
them later.
If you want to create and map the volumes on the volume creation window, click Continue after the task finishes and another window opens. In the Modify Host Mappings window, select the I/O Group and host to which you want to map these volumes by using the drop-down menu (as shown in Figure 10-101) and you are automatically directed to the host mapping table.
Figure 10-101 Modify Host Mappings: Select the host to which to map your volumes
In the Modify Host Mappings window, verify the mapping. If you want to modify the
mapping, select the volume or volumes that you want to map to a host and move each of
them to the table on the right by clicking the two greater than symbols (>>), as shown in
Figure 10-102. If you must remove the mappings, click the two less than symbols (<<).
After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships and finalize the creation of the volumes.
You return to the main Volumes panel. You can see that your volumes were created and
mapped, as shown in Figure 10-103.
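If you prefer the CLI, the presets that are described in step 3 roughly correspond to combinations of mkvdisk parameters. The pool names, volume names, and sizes in the following sketch are examples only:
svctask mkvdisk -mdiskgrp Pool_DS01 -iogrp 0 -size 10 -unit gb -name vol_generic
svctask mkvdisk -mdiskgrp Pool_DS01 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -name vol_thin
svctask mkvdisk -mdiskgrp Pool_DS01:Pool_DS02 -iogrp 0 -size 10 -unit gb -copies 2 -name vol_mirror
svctask mkvdisk -mdiskgrp Pool_DS01 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -compressed -name vol_compr
A host mapping can then be added with the mkvdiskhostmap command, as shown earlier in this chapter.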
10.8.3 Renaming a volume
Complete the following steps to rename a volume:
1. In the table, select the volume that you want to rename.
2. Click Actions → Rename, as shown in Figure 10-104.
Tip: Two other ways are available to rename a volume. You can right-click a volume and select Rename, or you can use the method that is described in 10.8.4 on page 708.
3. In the Rename Volume window, enter the new name that you want to assign to the volume
and click Rename, as shown in Figure 10-105.
Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.
Figure 10-106 Properties action
3. In the Overview tab, click Edit to modify the parameters for this volume, as shown in
Figure 10-107 on page 710.
From this window, you can modify the following parameters:
– Volume Name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63 characters.
– Accessible I/O Group: You can select another I/O Group from the list to add an
additional I/O Group to the existing I/O Group for this volume.
– Mirror Sync Rate: Change the Mirror Sync rate, which is the I/O governing rate, by
using a percentage that determines how quickly copies are synchronized. A zero value
disables synchronization.
– Cache Mode: Change the caching policy of a volume. Caching policy can be set to
Enabled (read/write caching enabled), Disabled (no caching enabled), or Read Only
(only read caching enabled).
– OpenVMS: Enter the UDID (OpenVMS). This field must be completed for an OpenVMS
system only.
Figure 10-107 Modify a volume
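Most of these parameters can also be changed from the CLI. The following is a minimal sketch with example volume names and values; chvdisk changes the name, cache mode, sync rate, and UDID, and addvdiskaccess adds another accessible I/O Group:
svctask chvdisk -name Volume_01_new Volume_01
svctask chvdisk -cache readonly Volume_01_new
svctask chvdisk -syncrate 80 Volume_01_new
svctask chvdisk -udid 1234 Volume_01_new
svctask addvdiskaccess -iogrp 1 Volume_01_new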
In addition to the properties that you can modify by following the instructions in 10.8.4 on page 708, other properties are specific to thin-provisioned or compressed volumes that you can modify by completing the following steps:
1. Depending on whether the volume is non-mirrored or mirrored, complete one of the
following steps:
– For a non-mirrored volume: Select the volume by clicking Actions → Volume Copy
Actions → Thin-Provisioned (or Compressed) → Edit Properties, as shown in
Figure 10-108 on page 711.
Figure 10-108 Non-mirrored volume: Thin-Provisioned Edit Properties
Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Edit Properties.
– For a mirrored volume: Select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify.
Click Actions, and then click Thin-Provisioned (or Compressed) → Edit Properties,
as shown in Figure 10-109.
Tip: You can also right-click the thin-provisioned copy and select
Thin-Provisioned → Edit Properties.
2. The Edit Properties - volumename (Copy #), (where volumename is the volume that you
selected in the previous step) window opens, as shown in Figure 10-110. In this window,
you can modify the following volume characteristics:
– Warning Threshold: Enter a percentage. This function generates a warning when the
used disk capacity on the thin-provisioned or compressed copy first exceeds the
specified threshold.
– Enable Autoexpand: Autoexpand allows the real disk size to grow, as required,
automatically.
GUI: You also can modify the real size of your thin-provisioned or compressed volume by
using the GUI, depending on your needs. For more information, see 10.8.11, “Shrinking
the real capacity of a thin-provisioned or compressed volume” on page 721, or 10.8.12,
“Expanding the real capacity of a thin-provisioned or compressed volume” on page 723.
Figure 10-111 Delete a volume action
3. The Delete Volume window opens, as shown in Figure 10-112. In the “Verify the number of
volumes that you are deleting” field, enter a value for the number of volumes that you want
to remove. This verification helps you to avoid deleting the wrong volumes.
Important: Deleting a volume is a destructive action for any user data on that volume.
A volume cannot be deleted if the SVC detected I/O activity on the volume during the defined recent time interval.
If you still have a volume that is associated with a host that is used with FlashCopy or
remote copy, and you want to delete the volume, select Delete the volume even if it has
host mappings or is used in FlashCopy mappings or remote-copy relationships.
Then, click Delete, as shown in Figure 10-112.
Note: You also can delete a mirror copy of a mirrored volume. For information about
deleting a mirrored copy, see 10.8.15, “Deleting a mirrored copy from a volume mirror”
on page 729.
Important: Before you delete a host mapping, ensure that the host is no longer using the
volume. Unmapping a volume from a host does not destroy the volume contents.
Unmapping a volume has the same effect as powering off the computer without first
performing a clean shutdown; therefore, the data on the volume might end up in an
inconsistent state. Also, any running application that was using the disk receives I/O errors
and might not recover until a forced application or server reboot.
3. In the Properties window, click the Host Maps tab, as shown in Figure 10-114 on
page 715.
Figure 10-114 Volume Details: Host Maps tab
Alternative: You also can access this window by selecting the volume in the table and
clicking View Mapped Hosts in the Actions menu, as shown in Figure 10-115 on
page 715.
Figure 10-116 Volume Details: Unmap Host
7. Click Unmap to remove the host mapping or mappings. You are returned to the Host Maps
window. Click Refresh to verify the results of the unmapping action, as shown in
Figure 10-117.
Figure 10-118 Unmap All Hosts from Actions menu
Tip: You can also right-click a volume and select Unmap All Hosts.
3. In the “Verify the number of mappings that this operation affects” field in the Unmap from Hosts window (Figure 10-119), enter the number of mappings that this operation removes. This verification helps you to avoid removing the wrong mappings.
4. Click Unmap to remove the host mapping or mappings. You are returned to the All
Volumes panel.
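On the CLI, individual mappings can be removed with the rmvdiskhostmap command. The host and volume names in this sketch are examples only:
svctask rmvdiskhostmap -host W2K12_FC Volume_01
Repeat the command for each host that the volume is mapped to if you want to remove all of its mappings.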
Although shrinking a volume is an easy task by using the SVC, ensure that your operating
system supports shrinking (natively or by using third-party tools) before you use this function.
In addition, the preferred practice is to always have a consistent backup before you attempt to
shrink a volume.
Important: For thin-provisioned or compressed volumes, the use of this method to shrink a
volume results in shrinking its virtual capacity. For more information about shrinking its real
capacity, see 10.8.11, “Shrinking the real capacity of a thin-provisioned or compressed
volume” on page 721.
Assuming that your operating system supports it, perform the following steps to shrink a
volume:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
2. In the volume table, select the volume that you want to shrink. Click Actions → Shrink, as
shown in Figure 10-120.
3. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-121 on page 719.
You can enter how much you want to shrink the volume by using the Shrink by field or you
can directly enter the final size that you want to use for the volume by using the Final size
field. The other field is computed automatically. For example, if you have a 10 GiB volume
and you want it to become 6 GiB, you can specify 4 GiB in the Shrink by field or you can
directly specify 6 GiB in the Final size field, as shown in Figure 10-121 on page 719.
Figure 10-121 Shrinking a volume
4. When you are finished, click Shrink and the changes are visible on your host.
Dynamic expansion of a volume is supported only when the volume is in use by one of the
following operating systems:
AIX 5L V5.2 and higher
Windows Server 2008, and Windows Server 2012 for basic and dynamic disks
Windows Server 2003 for basic disks, and Windows Server 2003 with Microsoft hot fix
(Q327020) for dynamic disks
Important: For thin-provisioned volumes, the use of this method results in expanding its
virtual capacity. For more information about expanding its real capacity, see 10.8.12,
“Expanding the real capacity of a thin-provisioned or compressed volume” on page 723.
If your operating system supports expanding a volume, complete the following steps:
1. In the table, select the volume that you want to expand.
2. Click Actions → Expand, as shown in Figure 10-122 on page 720.
Figure 10-122 Expand volume action
3. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-123 on page 721.
You can enter how much you want to enlarge the volume by using the Expand by field, or
you can directly enter the final size that you want to use for the volume by using the Final
size field. The other field is computed automatically.
For example, if you have a 10 GiB volume and you want it to become 15 GiB, you can
specify 5 GiB in the Expand by field or you can directly specify 15 GiB in the Final size
field, as shown in Figure 10-123 on page 721. The maximum final size shows 42 GiB for
the volume.
Figure 10-123 Expanding a volume
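The matching CLI commands are shrinkvdisksize and expandvdisksize. The volume name and sizes in the following sketch are examples only:
svctask shrinkvdisksize -size 4 -unit gb Volume_01
svctask expandvdisksize -size 5 -unit gb Volume_01
In both cases, the -size value is the amount by which the volume is shrunk or expanded, not the final size.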
Figure 10-124 Non-mirrored volume: Thin-Provisioned Shrink action
Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Shrink.
– For a mirrored volume, select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify and click Actions → Thin-Provisioned (or
Compressed) → Shrink, as shown in Figure 10-125.
Tip: You can also right-click the thin-provisioned or compressed mirrored copy and
select Thin-Provisioned (or Compressed) → Shrink.
2. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-126 on page 723.
You can enter the amount by which you want to shrink the volume by using the Shrink by field, or you can directly enter the final real capacity that you want for the volume by using the Final real capacity field. The other field is computed automatically.
For example, if you have a current real capacity equal to 323.2 MiB and you want a final
real size that is equal to 200 MiB, you can specify 123.2 MiB in the Shrink by field, or you
can directly specify 200 MiB in the Final real capacity field, as shown in Figure 10-126 on
page 723.
3. When you are finished, click Shrink, as shown in Figure 10-126 on page 723, and the
changes become visible to your host.
Figure 10-126 Shrink Volume real capacity window
To expand the real size of a thin-provisioned or compressed volume, complete the following
steps:
1. Depending on the case, use one of the following actions:
– For a non-mirrored volume, select the thin-provisioned or compressed volume and click
Actions → Volume Copy Actions → Thin-Provisioned (or Compressed) →
Expand, as shown in Figure 10-127 on page 724.
Figure 10-127 Non-mirrored volume: Compressed Expand action
Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Expand.
– For a mirrored volume, select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify, and click Actions → Thin-Provisioned (or
Compressed) → Expand, as shown in Figure 10-128.
Tip: You can also right-click the thin-provisioned or compressed copy and select
Thin-Provisioned (or Compressed) → Expand.
2. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-129 on page 725.
You can enter the amount by which you want to expand the volume by using the Expand by field, or you can directly enter the final real capacity that you want for the volume by using the Final real capacity field. The other field is computed automatically.
For example, if you have a current real capacity equal to 200 MiB and you want a final real
size equal to 700 MiB, you can specify 500 MiB in the Expand by field or you can directly
specify 700 MiB in the Final real capacity field, as shown in Figure 10-129 on page 725.
3. When you are finished, click Expand, as shown in Figure 10-129 on page 725, and the
changes become visible on your host.
Figure 10-129 Expand Volume real capacity window
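On the CLI, the real capacity of a thin-provisioned or compressed copy is changed with the -rsize parameter of the same commands. The volume name, copy ID, and sizes in this sketch are examples only:
svctask shrinkvdisksize -rsize 123 -unit mb -copy 1 Volume_01
svctask expandvdisksize -rsize 500 -unit mb -copy 1 Volume_01
As with the virtual capacity commands, the -rsize value is the amount of change, not the final real capacity.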
Tip: You can also right-click a volume and select Migrate to Another Pool.
3. The Migrate Volume Copy window opens, as shown in Figure 10-131 on page 726. Select
the storage pool to which you want to reassign the volume. You are presented with a list of
only the storage pools with the same extent size.
4. When you finish making your selections, click Migrate to begin the migration process.
Figure 10-131 Migrate Volume Copy window
5. You can check the progress of the migration by using the Running Tasks status area, as
shown in Figure 10-132.
To expand this area, click the icon, and then click Migration. Figure 10-133 shows a
detailed view of the running tasks.
When the migration is finished, the volume is part of the new pool.
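A similar migration can be started and monitored from the CLI. The pool and volume names in the following sketch are examples only:
svctask migratevdisk -mdiskgrp Pool_DS02 -vdisk Volume_01
svcinfo lsmigrate
The lsmigrate command reports the progress of all active extent migrations; as with the GUI, the target pool must have the same extent size as the source pool.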
10.8.14 Adding a mirrored copy to an existing volume
You can add a mirrored copy to an existing volume, which provides two copies of the
underlying disk extents.
Tip: You can also create a mirrored volume by selecting the Mirror or Thin Mirror preset
during the volume creation, as shown in Figure 10-94 on page 703.
You can use a volume mirror for any operation for which you can use a volume. Volume mirroring is transparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.
Creating a volume mirror from an existing volume is not restricted to the same storage pool;
therefore, this method is ideal to use to protect your data from a disk system or an array
failure. If one copy of the mirror fails, it provides continuous data access to the other copy.
When the failed copy is repaired, the copies automatically resynchronize.
You can also use a volume mirror as an alternative migration tool, where you can synchronize
the mirror before splitting off the original side of the mirror. The volume stays online, and it can
be used normally while the data is being synchronized. The copies can also have different structures (striped, image, sequential, or space-efficient) and different extent sizes.
To create a mirror copy from within a volumes panel, complete the following steps:
1. In the table, select the volume to which you want to add a mirrored copy.
2. Click Actions → Volume Copy Actions → Add Mirrored Copy, as shown in
Figure 10-134.
Tip: You can also right-click a volume and select Volume Copy Actions → Add
Mirrored Copy.
3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-135. You can perform the
following steps separately or in combination:
a. Select the storage pool in which you want to put the copy. To maintain higher
availability, choose a separate group.
b. For the Volume type, select Thin-Provisioned to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
• Real Size: 2% of Virtual Capacity
• Enable Autoexpand: Active
• Warning Threshold: 80% of Virtual Capacity
• Thin-Provisioned Grain Size: 256 KB
Changing options: You can change only Real Size, Enable Autoexpand, and
Warning Threshold after the thin-provisioned volume copy is added.
For more information about modifying the real size of your thin-provisioned volume,
see 10.8.11, “Shrinking the real capacity of a thin-provisioned or compressed
volume” on page 721, and 10.8.12, “Expanding the real capacity of a
thin-provisioned or compressed volume” on page 723.
5. You can check the synchronization progress by using the Running Tasks menu, as shown in Figure 10-136. To expand this Status Area, click the icon and click Volume Synchronization.
6. When the synchronization is finished, the new volume copy is part of the selected pool, as shown in Figure 10-137.
Primary copy: As shown in Figure 10-137, the primary copy is identified with an asterisk (*). In this example, Copy 0 is the primary copy and Copy 1 is the secondary copy.
Figure 10-138 Delete this Copy action
Tip: You can also right-click a volume and select Delete this Copy.
2. The Warning window opens, as shown in Figure 10-139. Click Yes to confirm.
If the copy that you intend to delete is the primary copy and the secondary copy is not yet synchronized, the attempt fails and you must wait until the synchronization completes.
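Comparable CLI commands for adding, monitoring, and deleting a volume copy are shown in the following sketch; the pool name, volume name, and copy ID are examples only:
svctask addvdiskcopy -mdiskgrp Pool_DS02 Volume_01
svcinfo lsvdisksyncprogress Volume_01
svctask rmvdiskcopy -copy 1 Volume_01
The lsvdisksyncprogress command shows how far the new copy is synchronized; wait until it reports 100% before you remove the copy that you no longer need.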
Tip: You can also right-click a volume and select Split into New Volume.
2. The Split Volume Copy window opens, as shown in Figure 10-141 on page 731. In this
window, enter a name for the new volume.
Volume name: If you do not provide a name, the SVC automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63 characters.
Important: After you split a volume mirror, the two resulting volumes cannot be
resynchronized or recombined. Instead, you must create a new volume copy.
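If you script this task, the CLI equivalent is the splitvdiskcopy command. The following line is a sketch with placeholder names (copy 1 of volume VOL01 is split off as a new volume named SPLIT_VOL01); check the exact syntax against your CLI reference:
svctask splitvdiskcopy -copy 1 -name SPLIT_VOL01 VOL01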
2. The Validate Volume Copies window opens, as shown in Figure 10-143. In this window,
select one of the following options:
– Generate Event of Differences: Use this option if you want to verify only that the
mirrored volume copies are identical. If a difference is found, the command stops and
logs an error that includes the logical block address (LBA) and the length of the first
difference. By starting at a different LBA each time, you can use this option to count the
number of differences on a volume.
– Overwrite Differences: Use this option to overwrite the content from the primary volume
copy to the other volume copy. The command corrects any differing sectors by copying
the sectors from the primary copy to the copies that are compared. Upon completion,
the command process logs an event, which indicates the number of differences that
were corrected. Use this option if you are sure that the primary volume copy data is
correct or that your host applications can handle incorrect data.
– Return Media Error to Host: Use this option to convert sectors on all volume copies that
contain different contents into virtual medium errors. Upon completion, the command
logs an event, which indicates the number of differences that were found, the number
of differences that were converted into medium errors, and the number of differences
that were not converted. Use this option if you are unsure what data is correct, and you
do not want an incorrect version of the data to be used.
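On the CLI, these three options correspond (as we understand the mapping) to the -validate, -resync, and -medium parameters of the repairvdiskcopy command. The following lines are a sketch that uses the placeholder volume VOL01; run only one repair option at a time and verify the parameters against your CLI reference:
svctask repairvdiskcopy -validate VOL01
svctask repairvdiskcopy -resync VOL01
svctask repairvdiskcopy -medium VOL01
svcinfo lsrepairvdiskcopyprogress VOL01
The lsrepairvdiskcopyprogress command shows the progress of a running validation or repair.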
Figure 10-144 Add Mirrored Copy actions
Tip: You can also right-click a volume and select Volume Copy Actions → Add
Mirrored Copy.
3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-145 on page 734. You can
perform the following steps separately or in combination:
a. Select the storage pool in which you want to put the copy. To maintain higher
availability, choose a storage pool that differs from the pool that holds the primary copy.
b. For the Volume Type, select Thin-Provisioned to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
• Real Size: 2% of Virtual Capacity
• Autoexpand: Active
• Warning Threshold: 80% of Virtual Capacity
• Thin-Provisioned Grain Size: 256 KB
Changing options: You can change Real Size, Autoexpand, and Warning
Threshold after the volume copy is added in the GUI. To change Thin-Provisioned
Grain Size, you must use the CLI.
Figure 10-145 Add Volume Copy window
5. You can check the migration by using the Running Tasks status area menu, as shown in
Figure 10-132 on page 726.
To expand this status area, click the icon and click Volume Synchronization.
Figure 10-146 shows the detailed view of the running tasks.
Mirror Sync Rate: You can change the Mirror Sync Rate (the default is 50%) by
modifying the volume properties. For more information, see 10.8.4 on page 708.
6. When the synchronization is finished, in the table, select the original, non-thin-provisioned
copy that you want to remove. Select Actions → Delete this Copy, as shown in
Figure 10-147.
Tip: You can also right-click a volume and select Delete this Copy.
7. The Warning window opens, as shown in Figure 10-148. Click Yes to confirm your choice.
Tip: If you try to remove the primary copy before it is completely synchronized with the
other copy, you receive the following message:
The command failed because the copy specified is the only synchronized copy.
You must wait until the end of the synchronization to remove the primary copy.
When the copy is deleted, your thin-provisioned volume is ready for use and automatically
mapped to the original host.
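The entire migration to a thin-provisioned copy can also be scripted. The following sketch uses placeholder names (volume VOL01, thin target pool Pool_Thin) and the same preset values that the GUI uses; confirm the parameters against your CLI reference before you use them:
svctask addvdiskcopy -mdiskgrp Pool_Thin -rsize 2% -autoexpand -grainsize 256 -warning 80% VOL01
svcinfo lsvdisksyncprogress VOL01
svctask rmvdiskcopy -copy 0 VOL01
Wait until lsvdisksyncprogress reports that the copies are synchronized before you remove the original copy (copy 0 in this example).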
Copy Services: For more information about the functionality of Copy Services in the SVC
environment, see Chapter 8, “Advanced Copy Services” on page 405.
In this section, we describe the tasks that you can perform at a FlashCopy level. The following
methods can be used to visualize and manage your FlashCopy mappings:
Use the SVC Overview panel. Move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy, as shown in Figure 10-149.
In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost and that data is replaced
by the copied data.
Use the Consistency Groups panel, as shown in Figure 10-150. A Consistency Group is a
container for mappings. You can add many mappings to a Consistency Group.
Use the FlashCopy Mappings panel, as shown in Figure 10-151. A FlashCopy mapping
defines the relationship between a source volume and a target volume.
2. Select the volume for which you want to create the FlashCopy relationship, as shown in
Figure 10-153 on page 738.
Figure 10-153 FlashCopy mapping: Select the volume (or volumes)
Depending on whether you created the target volumes for your FlashCopy mappings or you
want the SVC to create the target volumes for you, the following options are available:
If you created the target volumes, see “Using existing target volumes” on page 738.
If you want the SVC to create the target volumes for you, see “Creating target volumes” on
page 742.
2. The Create FlashCopy Mapping window opens (Figure 10-155 on page 739). In this
window, you must create the relationship between the source volume (the disk that is
copied) and the target volume (the disk that receives the copy). A mapping can be created
between any two volumes inside an SVC clustered system. Select a source volume and a
target volume for your FlashCopy mapping, and then click Add. If you must create other
copies, repeat this step.
Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.
Figure 10-155 Create a FlashCopy Mapping by using an existing target volume
Volumes: The volumes do not have to be in the same I/O Group or storage pool.
3. Click Next after you create all of the relationships that you need, as shown in
Figure 10-156.
4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-157 on page 740:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates a replica of the source volume on a target volume. The copy can be
changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
source and target volumes.
Figure 10-157 Create FlashCopy Mapping window
For each preset, you can customize various advanced options. You can access these
settings by clicking Advanced Settings.
5. The advanced setting options are shown in Figure 10-158.
If you prefer not to customize these settings, go directly to step 6 on page 741.
You can customize the following advanced setting options, as shown in Figure 10-158:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
6. Select whether to add the mappings to a Consistency Group. If you do not want to include
this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a
consistency group.
Click Finish, as shown in Figure 10-160.
7. Check the result of this FlashCopy mapping. For each FlashCopy mapping relationship
that was created, a mapping name is automatically generated that starts with fcmapX,
where X is the next available number. If needed, you can rename these mappings, as
shown in Figure 10-161. For more information, see 10.9.11, “Renaming a FlashCopy
mapping” on page 759.
Target volume naming: If the target volume does not exist, the target volume is
created. The target volume name is based on its source volume and a generated
number at the end, for example, source_volume_name_XX, where XX is a number that
was generated dynamically.
2. In the Create FlashCopy Mapping window (Figure 10-163 on page 743), you must select
one FlashCopy preset. The GUI provides the following presets to simplify common
FlashCopy operations:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.
Figure 10-163 Create FlashCopy Mapping window
For each preset, you can customize various advanced options. To access these settings,
click Advanced Settings. The advanced setting options are shown in Figure 10-164.
If you prefer not to customize these advanced settings, go directly to step 3 on page 744.
You can customize the advanced setting options that are shown in Figure 10-164:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.
Figure 10-165 Selecting the option to add the mappings to a Consistency Group
4. The next window shows the volume capacity management dialog. Choose one of the four
options based on the capacity preset that you want to use with your target volume
(Figure 10-166). Here, you can decide whether the target volume manages capacity in a
generic, thin-provisioned, or compressed manner, or whether the target volume inherits its
capacity properties from the source volume.
If you select thin-provisioning as the method to manage the capacity of your target volume,
you can set up the following parameters (as shown in Figure 10-167):
– Real Capacity: Enter the real size that you want to allocate. This size is the amount of
disk space that is physically allocated, specified either as a percentage of the virtual
size or as a specific size in GB.
– Automatically Expand: Select this option so that the real capacity can grow
automatically, as required.
– Warning Threshold: Enter a percentage or select a specific size for the usage threshold
warning.
Figure 10-167 Create FlashCopy mapping capacity management for the thin-provisioning preset
Similarly, if you want to use the compression preset, you can configure the real capacity,
auto-expand, and warning threshold setting on the target volume, as shown in
Figure 10-168.
Figure 10-168 Create FlashCopy mapping capacity management for the compression preset
5. In the next window (Figure 10-169), select the storage pool that is used to automatically
create targets. You can choose to use the same storage pool that is used by the source
volume. Or, you can select a storage pool from a list. Click Finish.
6. Check the result of this FlashCopy mapping, as shown in Figure 10-170. For each
FlashCopy mapping relationship that is created, a mapping name is automatically
generated that starts with fcmapX where X is the next available number. If necessary, you
can rename these mappings, as shown in Figure 10-170. For more information, see
10.9.11, “Renaming a FlashCopy mapping” on page 759.
Tip: You can start FlashCopy from the SVC GUI. However, the use of the SVC GUI might
be impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically, or at varying times. In these cases, creating a script by using the CLI might be
more convenient.
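A minimal scripted example, assuming two existing volumes with the placeholder names VOL_SOURCE and VOL_TARGET, might look like the following lines (fcmap0 stands for the ID or name that is returned when the mapping is created); verify the syntax against the CLI reference for your code level:
svctask mkfcmap -source VOL_SOURCE -target VOL_TARGET -copyrate 50
svctask startfcmap -prep fcmap0
svcinfo lsfcmapprogress fcmap0
The -prep parameter prepares (flushes) the cache for the source volume before the mapping is started.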
10.9.2 Single-click snapshot
The snapshot creates a point-in-time backup of production data. The snapshot is not intended
to be an independent copy. Instead, it is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot uses the following preset parameters:
Background copy: No
Incremental: No
Delete after completion: No
Cleaning rate: No
Primary copy source pool: Target pool
3. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks
status area, as shown in Figure 10-172.
10.9.3 Single-click clone
The clone preset creates an exact replica of the volume, which can be changed without
affecting the original volume. After the copy completes, the mapping that was created by the
preset is automatically deleted.
4. A volume is created as a target volume for this clone in the same pool as the source
volume. The FlashCopy mapping is created and started. You can check the FlashCopy
progress in the Progress column or in the Running Tasks Status column. After the
FlashCopy clone is created, the mapping is removed and the new cloned volume becomes
available, as shown in Figure 10-174.
10.9.4 Single-click backup
The backup creates a point-in-time replica of the production data. After the copy completes,
the backup view can be refreshed from the production data, with minimal copying of data from
the production volume to the backup volume.
4. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column, as shown in
Figure 10-176, or in the Running Tasks Status column.
10.9.5 Creating a FlashCopy Consistency Group
To create a FlashCopy Consistency Group in the SVC GUI, complete the following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups. The Consistency Groups panel opens, as
shown in Figure 10-177.
2. Click Create Consistency Group and enter the FlashCopy Consistency Group name that
you want to use and click Create (Figure 10-178).
Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The Consistency Group name can be 1 - 63 characters.
Figure 10-179 New Consistency Group
3. If you select a new Consistency Group, click Actions → Create FlashCopy Mapping, as
shown in Figure 10-181.
4. If you did not select a Consistency Group, click Create FlashCopy Mapping, as shown in
Figure 10-182.
5. The Create FlashCopy Mapping window opens, as shown in Figure 10-183. In this
window, you must create the relationships between the source volumes (the volumes that
are copied) and the target volumes (the volumes that receive the copy). A mapping can be
created between any two volumes in a clustered system.
Important: The source volume and the target volume must be of equal size.
Tip: The volumes do not have to be in the same I/O Group or storage pool.
6. Select a volume in the Source Volume column by using the drop-down list. Then, select a
volume in the Target Volume column by using the drop-down list. Click Add, as shown in
Figure 10-183. Repeat this step to create other relationships.
To remove a relationship that was created, click the remove icon next to the relationship.
Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.
7. Click Next after all of the relationships that you want to create are shown (Figure 10-184).
Figure 10-184 Create FlashCopy Mapping with the relationships that were created
8. In the next window, you must select one FlashCopy preset. The GUI provides the following
presets to simplify common FlashCopy operations, as shown in Figure 10-185:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.
Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings.
If you prefer not to customize these settings, go directly to step 9.
You can customize the following advanced setting options, as shown in Figure 10-186:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Incremental copies: Even if the type of the FlashCopy mapping is incremental, the
first copy process copies all of the data from the source volume to the target volume.
9. If you do not want to create these FlashCopy mappings from a Consistency Group (step 3
on page 751), you must confirm your choice by selecting No, do not add the mappings
to a consistency group, as shown in Figure 10-187 on page 755.
Figure 10-187 Do not add the mappings to a Consistency Group
10.Click Finish.
11.Check the result of this FlashCopy mapping in the Consistency Groups window, as shown
in Figure 10-188.
For each FlashCopy mapping relationship that you created, a mapping name is
automatically generated that starts with fcmapX where X is an available number. If
necessary, you can rename these mappings. For more information, see 10.9.11,
“Renaming a FlashCopy mapping” on page 759.
Tip: You can start FlashCopy from the SVC GUI. However, if you plan to handle many
FlashCopy mappings or Consistency Groups periodically, or at varying times, creating a
script that uses the CLI might be more convenient.
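As a sketch of such a script, the following CLI sequence creates a Consistency Group, adds two mappings to it, and starts the group. All object names are placeholders; verify the syntax against the CLI reference for your code level:
svctask mkfcconsistgrp -name FCCG_TEST
svctask mkfcmap -source VOL_S1 -target VOL_T1 -consistgrp FCCG_TEST
svctask mkfcmap -source VOL_S2 -target VOL_T2 -consistgrp FCCG_TEST
svctask startfcconsistgrp -prep FCCG_TEST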
Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes.
Figure 10-189 Show Related Volumes
In the Related Volumes window (Figure 10-190), you can see the related mapping for a
volume. If you click one of these volumes, you can see its properties. For more information
about volume properties, see 10.8.1, “Volume information” on page 700.
Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group.
Figure 10-191 Move to Consistency Group action
4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list (Figure 10-192).
Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group.
In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 10-194.
Tip: You can also right-click a FlashCopy mapping and select Edit Properties.
4. In the Edit FlashCopy Mapping window, you can modify the following parameters for a
selected FlashCopy mapping, as shown in Figure 10-196 on page 759:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
Figure 10-196 Edit FlashCopy Mapping
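The same two parameters can also be changed from the CLI with the chfcmap command. The following line is an illustrative sketch with a placeholder mapping name; confirm the parameter names in your CLI reference:
svctask chfcmap -copyrate 80 -cleanrate 50 fcmap0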
Tip: You can also right-click a FlashCopy mapping and select Rename Mapping.
4. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to the FlashCopy mapping and click Rename, as shown in Figure 10-198 on page 760.
Figure 10-198 Renaming a FlashCopy mapping
FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 10-200 on page 761.
Figure 10-200 Changing the name for a Consistency Group
Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.
The new Consistency Group name is displayed in the Consistency Group panel.
Tip: You can also right-click a FlashCopy mapping and select Delete Mapping.
4. The Delete FlashCopy Mapping window opens, as shown in Figure 10-202. In the “Verify
the number of FlashCopy mappings that you are deleting” field, you must enter the
number of FlashCopy mappings that you want to delete. This verification helps to avoid
deleting the wrong mappings.
If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target volume is inconsistent, or if the target volume has other
dependencies.
Click Delete, as shown in Figure 10-202.
Important: Deleting a Consistency Group does not delete the FlashCopy mappings.
Figure 10-203 Delete Consistency Group action
Tip: You can also right-click a FlashCopy mapping and select Start.
4. You can check the FlashCopy progress in the Progress column of the table or in the
Running Tasks status area. After the task completes, the FlashCopy mapping status is in a
Copied state, as shown in Figure 10-206.
Important: Stop a FlashCopy copy process only when the data on the target volume is
useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by the SVC.
Figure 10-208 FlashCopy Mapping status
In this section, we describe the tasks that you can perform at a remote copy level.
The following panels are used to visualize and manage your remote copies:
The Remote Copy panel, as shown in Figure 10-209.
By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same SVC clustered
system or on two separate SVC systems.
To access the Remote Copy panel, move the mouse pointer over the Copy Services
selection and click Remote Copy.
10.10.1 System partnership
A system partnership is not limited to a one-to-one configuration. Partnerships can use
Fibre Channel, FCoE, or IP connections, and a partnership can exist among multiple SVC
clustered systems. You can use partnerships to create the following types of configurations
that use a maximum of four connected SVC systems:
Star configuration, as shown in Figure 10-211.
Fully connected configuration, as shown in Figure 10-213.
Important: All SVC clustered systems must be at level 5.1 or higher. A system can be
partnered with up to three remote systems. No more than four systems can be in the same
connected set. Only one IP partnership is supported.
Intra-cluster Metro Mirror: If you are creating an intra-cluster Metro Mirror, do not perform
this next step to create the SVC clustered system Metro Mirror partnership. Instead, go to
10.10.4, “Creating stand-alone remote copy relationships” on page 772.
To create an FC partnership between the SVC systems by using the GUI, complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 10-216
on page 768.
Figure 10-215 Selecting Partnerships window
2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 10-216.
3. In the Create Partnership window (Figure 10-217), complete the following information:
– Select the partnership type, either Fibre Channel or IP.
– Select an available partner system from the drop-down list, as shown in Figure 10-218
on page 769. If no candidate is available, the following error message is displayed:
This system does not have any candidates.
– Enter a link bandwidth (Mbps) that is used by the background copy process between
the systems in the partnership. Set this value so that it is less than or equal to the
bandwidth that can be sustained by the communication link between the systems. The
link must sustain any host requests and the rate of the background copy.
– Enter the background copy rate.
– Click OK to confirm the partnership relationship.
4. As shown in Figure 10-219, our partnership is in the Partially Configured state because
this work was performed only on one side of the partnership so far.
To fully configure the partnership between both systems, perform the same steps on the
other SVC system in the partnership. For simplicity and brevity, we show only the two most
significant windows when the partnership is fully configured.
5. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership.
Specify the available bandwidth for the background copy (200 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows
(which are shown in Figure 10-220 and Figure 10-221 on page 770) confirm that our
remote system partnership is now in the Fully Configured state. (Figure 10-219 shows the
remote system ITSO SVC 5 from the local system ITSO SVC 3.)
Figure 10-221 shows the remote system ITSO SVC 3 from the local system ITSO SVC 5.
To create an IP partnership between SVC systems by using the GUI, complete the following
steps:
1. From the SVC System panel, move the mouse pointer over Copy Services and click
Partnerships. For type, select IP, as shown in Figure 10-222.
Figure 10-223 Create Partnership window for IP
As shown in Figure 10-224, our partnership is in the Partially Configured state because
only the work on one side of the partnership was completed so far.
To fully configure the partnership between both systems, we must perform the same steps
on the other SVC system in the partnership. For simplicity and brevity, we only show the
two most significant windows when the partnership is fully configured.
4. Starting the SVC GUI for ITSO SVC 5, select ITSO SVC 3 for the system partnership.
Specify the available bandwidth for the background copy (100 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows (as
shown in Figure 10-225 and Figure 10-226 on page 772) confirm that our remote system
partnership is now in the Fully Configured state. Figure 10-225 shows the remote system
ITSO SVC 5 from the local system ITSO SVC 3.
Figure 10-226 on page 772 shows the remote system ITSO SVC 3 from the local system
ITSO SVC 5.
Figure 10-226 System ITSO SVC 5: Fully configured remote partnership
Note: The definition of the bandwidth setting that is used when the IP partnership is created
has changed. Previously, the bandwidth setting defaulted to 50 MBps, and it was the
maximum transfer rate from the primary site to the secondary site for the initial volume
synchronization or resynchronization. The link bandwidth setting is now configured in Mbps
(megabits per second) rather than MBps. Set this link bandwidth to a value that the
communication link can sustain, or to the value that is allocated for replication. The
background copy rate setting is now a percentage of the link bandwidth, and it determines
the bandwidth that is available for the initial synchronization and resynchronization, or for
Global Mirror with Change Volumes.
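For scripted deployments, partnerships can also be created from the CLI. The following lines are a sketch only: the remote system name REMOTE_SVC and the IP address 10.10.10.10 are placeholders, and the command and parameter names (mkfcpartnership and mkippartnership at this code level; earlier releases used mkpartnership) should be verified against the CLI reference:
svctask mkfcpartnership -linkbandwidthmbits 200 -backgroundcopyrate 50 REMOTE_SVC
svctask mkippartnership -type ipv4 -clusterip 10.10.10.10 -linkbandwidthmbits 100 -backgroundcopyrate 50
Remember that the equivalent command must also be run on the remote system before the partnership becomes fully configured.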
3. In the Create Relationship window, select one of the following types of relationships that
you want to create (as shown in Figure 10-228 on page 773):
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
However, the copy might not contain the last few updates if a disaster recovery
operation is performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume. Changes can
then be copied to the remote system asynchronously. The FlashCopy relationship
exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volume is for internal use. The user cannot
manipulate it as they can with a normal FlashCopy mapping. Most svctask *fcmap
commands fail.
Figure 10-228 Select the type of relationship that you want to create
Click Next.
5. In the next window, select the location of the auxiliary volumes, as shown in
Figure 10-230:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the
drop-down list.
After you make a selection, click Next.
6. In the New Relationship window that is shown in Figure 10-231, you can create
relationships. Select a master volume in the Master drop-down list. Then, select an
auxiliary volume in the Auxiliary drop-down list for this master and click Add. If needed,
repeat this step to create other relationships.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list box for a specific source
volume.
Figure 10-232 Create the relationships between the master and auxiliary volumes
After all of the relationships that you want to create are shown, click Next.
8. Specify whether the volumes are synchronized, as shown in Figure 10-233. Then, click
Next.
9. In the last window, select whether you want to start to copy the data, as shown in
Figure 10-234. Click Finish.
10.Figure 10-235 shows that the task to create the relationship is complete.
The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status is Inconsistent Copying. You can check the copying progress in
the Running Tasks status area, as shown in Figure 10-236.
After the copy is finished, the relationship status changes to Consistent synchronized.
Figure 10-237 on page 777 shows the Consistent synchronized status.
Figure 10-237 Consistent copy of the mirrored volumes
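A stand-alone relationship can also be created and started from the CLI. The following sketch uses placeholder names (volumes VOL_MASTER and VOL_AUX, remote system REMOTE_SVC, and relationship REL01); add the -global parameter for Global Mirror, and verify the syntax against your CLI reference:
svctask mkrcrelationship -master VOL_MASTER -aux VOL_AUX -cluster REMOTE_SVC -name REL01
svctask startrcrelationship REL01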
3. Enter a name for the Consistency Group, and then, click Next, as shown in Figure 10-239.
Consistency Group name: If you do not provide a name, the SVC automatically
generates the name rccstgrpX, where X is the ID sequence number that is assigned by
the SVC internally. You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The Consistency Group name can be 1 - 15 characters. No
blanks are allowed.
4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 10-240:
– On this system, which means that the volumes are local
– On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.
5. Select whether you want to add relationships to this group, as shown in Figure 10-241.
The following options are available:
– If you select Yes, click Next to continue the wizard and go to step 6.
– If you select No, click Finish to create an empty Consistency Group that can be used
later.
6. Select one of the following types of relationships to create, as shown in Figure 10-242 on
page 779:
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated,
but the copy might not contain the last few updates if a disaster recovery operation is
performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume.
Changes can then be copied to the remote system asynchronously. The FlashCopy
relationship exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volumes is for internal use. The user cannot
manipulate this type of mapping like a normal FlashCopy mapping.
Most svctask *fcmap commands fail.
Click Next.
Figure 10-242 Select the type of relationship that you want to create
7. As shown in Figure 10-243 on page 780, you can optionally select existing relationships to
add to the group. Click Next.
Note: To select multiple relationships, hold down Ctrl and click the entries that you want
to include.
Figure 10-243 Select existing relationships to add to the group
8. In the window that is shown in Figure 10-244, you can create relationships. Select a
volume in the Master drop-down list. Then, select a volume in the Auxiliary drop-down list
for this master. Click Add, as shown in Figure 10-244. Repeat this step to create other
relationships, if needed.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are displayed for a specific source volume.
To remove a relationship that was created, click the remove icon (Figure 10-244). After all
of the relationships that you want to create are displayed, click Next.
Figure 10-244 Create relationships between the master and auxiliary volumes
9. Specify whether the volumes are already synchronized. Then, click Next, as shown in
Figure 10-245.
10.In the last window, select whether you want to start to copy the data. Then, click Finish, as
shown in Figure 10-246.
11.The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent copying. You can check the
copying progress in the Running Tasks status area, as shown in Figure 10-247 on
page 782.
Figure 10-247 Consistency Group created with relationship in copying and synchronized status
After the copies are completed, the relationships and the Consistency Group change to the
Consistent synchronized status.
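The equivalent CLI sequence is sketched in the following lines, again with placeholder names (Consistency Group CG01, relationship REL01, and remote system REMOTE_SVC); verify the syntax against your CLI reference:
svctask mkrcconsistgrp -name CG01 -cluster REMOTE_SVC
svctask chrcrelationship -consistgrp CG01 REL01
svctask startrcconsistgrp CG01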
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 10-249 on page 783.
Figure 10-249 Changing the name for a Consistency Group
Consistency Group name: The Consistency Group name can consist of the letters
A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore (_) character. The
name can be 1 - 15 characters. However, the name cannot start with a number, dash,
or an underscore character. No blanks are allowed.
The new Consistency Group name is displayed on the Remote Copy panel.
Tip: You can also right-click a remote copy relationship and select Rename.
3. In the Rename Relationship window, enter the new name that you want to assign to the
remote copy relationship and click Rename, as shown in Figure 10-251 on page 784.
Figure 10-251 Renaming a remote copy relationship
Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group.
5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship by using the drop-down list, as shown in Figure 10-253 on
page 785. Click Add to Consistency Group to confirm your changes.
Figure 10-253 Adding a relationship to a Consistency Group
Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group.
5. In the Remove Relationship From Consistency Group window, click Remove, as shown in
Figure 10-255 on page 786.
Figure 10-255 Remove a relationship from a Consistency Group
Tip: You can also right-click a relationship and select Start from the list.
5. If the relationship was not consistent, you can check the remote copy progress in the
Running Tasks status area, as shown in Figure 10-257 on page 787.
Figure 10-257 Checking the remote copy synchronization progress
6. After the task is complete, the remote copy relationship status has a Consistent
Synchronized state, as shown in Figure 10-258.
Figure 10-259 Remote Copy Consistency Groups view
3. Click Actions → Start (Figure 10-260) to start the remote copy Consistency Group.
4. You can check the remote copy Consistency Group progress, as shown in Figure 10-261.
5. After the task completes, the Consistency Group and all of its relationships are in a
Consistent Synchronized state, as shown in Figure 10-262 on page 789.
Figure 10-262 Consistent Synchronized Consistency Group
Important: When the copy direction is switched, no outstanding I/O can exist to the
volume that changes from primary to secondary because all I/O is inhibited to that volume
when it becomes the secondary. Therefore, careful planning is required before you switch
the copy direction for a remote copy relationship.
Figure 10-263 Switch copy direction action
5. The Warning window that is shown in Figure 10-264 opens. A confirmation is needed to
switch the remote copy relationship direction. The remote copy is switched from the
master volume to the auxiliary volume. Click Yes.
Figure 10-265 on page 791 shows the command-line output about this task.
Figure 10-265 Command-line output for switch relationship action
The copy direction is now switched, as shown in Figure 10-266. The auxiliary volume is
now accessible and shown as the primary volume. Also, the auxiliary volume is now
synchronized to the master volume.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a Consistency Group.
Complete the following steps to switch a Consistency Group:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to switch.
3. Click Actions → Switch (as shown in Figure 10-267) to start the remote copy process.
4. The warning window that is shown in Figure 10-268 opens. A confirmation is needed to
switch the Consistency Group direction. In the example that is shown in Figure 10-268, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes.
The remote copy direction is now switched, as shown in Figure 10-269 on page 793. The
auxiliary volume is now accessible and shown as a primary volume.
Figure 10-269 Checking Consistency Group synchronization direction
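On the CLI, the direction of a relationship or a Consistency Group is switched with the switchrcrelationship and switchrcconsistgrp commands. The following lines are a sketch with placeholder names (REL01 and CG01); the -primary aux parameter makes the auxiliary volume the primary:
svctask switchrcrelationship -primary aux REL01
svctask switchrcconsistgrp -primary aux CG01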
Tip: You can also right-click a relationship and select Stop from the list.
5. The Stop Remote Copy Relationship window opens, as shown in Figure 10-271 on
page 794. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Relationship.
Figure 10-271 Stop Remote Copy Relationship window
6. Figure 10-272 shows the command-line text for the stop remote copy relationship.
The new relationship status can be checked, as shown in Figure 10-273 on page 795. The
relationship is now Consistent Stopped.
Figure 10-273 Checking remote copy synchronization status
Tip: You can also right-click a relationship and select Stop from the list.
4. The Stop Remote Copy Consistency Group window opens, as shown in Figure 10-275 on
page 796. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Consistency Group.
Figure 10-275 Stop Remote Copy Consistency Group window
The new relationship status can be checked, as shown in Figure 10-276. The relationship
is now Consistent Stopped.
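The CLI equivalents are the stoprcrelationship and stoprcconsistgrp commands. In the following sketch (placeholder names REL01 and CG01), the -access parameter corresponds to the Allow secondary read/write access option in the GUI:
svctask stoprcrelationship -access REL01
svctask stoprcconsistgrp -access CG01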
Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and click the entries that you want.
Tip: You can also right-click a remote copy mapping and select Delete Relationship.
Figure 10-277 Selecting the Delete Relationship option
4. The Delete Relationship window opens (Figure 10-278). In the “Verify the number of
relationships that you are deleting” field, enter the number of relationships that you want
to delete. This verification helps to avoid deleting the wrong relationships.
Click Delete, as shown in Figure 10-278.
Important: Deleting a Consistency Group does not delete its remote copy mappings.
Figure 10-279 Selecting the Delete Consistency Group option
4. The warning window that is shown in Figure 10-280 opens. Click Yes.
Figure 10-281 System status panel
On the System status panel (beneath the SVC nodes), you can view the global storage
usage, as shown in Figure 10-282. By using this method, you can monitor the physical
capacity and the allocated capacity of your SVC system. You can change between the
Allocation view and the Compression view to see the capacity usage and space savings of
the Real-time Compression feature, as shown in Figure 10-283.
10.11.2 View I/O Groups and their associated nodes
The System status panel shows an overview of the SVC system with its I/O Groups and their
associated nodes. As shown in Figure 10-284, the node status can be checked by using a
color code that represents its status.
You can click an individual node. You can right-click the node, as shown in Figure 10-285, to
open the list of actions.
If you click Properties, you see the following view, as shown in Figure 10-286 on page 801.
Figure 10-286 Properties for a node
Under View in the list of actions, you can see information about the Fibre Channel Ports, as
shown in Figure 10-287.
Left-click one node to see more details about a single node, as shown in Figure 10-289 on
page 802.
Figure 10-289 View of one node
10.11.4 Renaming the SAN Volume Controller clustered system
All objects in the SVC system have names that are user defined or system generated.
Choose a meaningful name when you create an object. If you do not choose a name for the
object, the system generates a name for you. A well-chosen name serves not only as a label
for an object, but also as a tool for tracking and managing the object. Choosing a meaningful
name is important if you decide to use configuration backup and restore.
Naming rules
When you choose a name for an object, the following rules apply:
Names must begin with a letter.
Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.
To rename the system from the System panel, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System panel, as shown in
Figure 10-291.
2. From the panel, select Rename System, as shown in Figure 10-292 on page 804.
Figure 10-292 Select Rename System
3. The panel opens, as shown in Figure 10-293. Specify a new name for the system and click
Rename.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.
4. The Warning window opens, as shown in Figure 10-294 on page 805. If you are using the
iSCSI protocol, changing the system name or the iSCSI Qualified Name (IQN) also
changes the IQN of all of the nodes in the system. Changing the system name or the IQN
might require the reconfiguration of all iSCSI-attached hosts. This reconfiguration might be
required because the IQN for each node is generated by using the system and node
names.
Figure 10-294 System rename warning
5. Click Yes.
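If you prefer the CLI, the system can be renamed with the chsystem command, as in the following sketch (ITSO_SVC_NEW is a placeholder name). The same iSCSI IQN considerations apply:
svctask chsystem -name ITSO_SVC_NEW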
Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. Site1 and site2 are the two sites that make up the two halves of the stretched
system, and site3 is the site that holds the quorum disk. You can rename the sites to describe your data center
locations.
3. The Rename Sites panel with the site information opens, as shown in Figure 10-296.
4. Enter the appropriate site information. Figure 10-297 shows the updated Rename Sites
panel. Click Rename.
Figure 10-298 Information panel for a node
3. The panel to rename the node opens, as shown in Figure 10-299. Enter the new name of
the node.
4. Click Rename. If required, repeat these steps for all remaining nodes.
Important: Starting with the 2145-DH8 nodes, we no longer have uninterruptible power
supplies. The batteries are now included in the system.
If you remove the main power while the system is still running, the uninterruptible power
supply units or internal batteries detect the loss of power and instruct the nodes to shut down.
This shutdown can take several minutes to complete. Although the uninterruptible power
supply units or internal batteries have sufficient power to perform the shutdown, you
unnecessarily drain a unit’s batteries.
When power is restored, the SVC nodes start. However, one of the first checks that is
performed by the SVC node is to ensure that the batteries have sufficient power to survive
another power failure, which enables the node to perform a clean shutdown.
(You do not want the batteries to run out of power before the node’s shutdown activities
complete.) If the batteries are not charged sufficiently, the node does not start.
It can take up to 3 hours to charge the batteries sufficiently for a node to start.
Important: When a node shuts down because of a power loss, the node dumps the cache
to an internal hard disk drive so that the cached data can be retrieved when the system
starts. With 2145-8F2/8G4 nodes, the cache is 8 GiB. With 2145-CF8/CG8 nodes, the
cache is 24 GiB. With 2145-DH8 nodes, the cache is up to 64 GiB. Therefore, this process
can take several minutes to dump to the internal drive.
The SVC uninterruptible power supply units or internal batteries are designed to survive at
least two power failures in a short time. After that time, the nodes do not start until the
batteries have sufficient power to survive another immediate power failure.
During maintenance activities, if the uninterruptible power supply units or batteries detect
power and then detect a loss of power multiple times (the nodes start and shut down more
than once in a short time), you might discover that you unknowingly drained the batteries. You
must wait until they are charged sufficiently before the nodes start.
Important: Before a system is shut down, quiesce all I/O operations that are directed to
this system because you lose access to all of the volumes that are serviced by this
clustered system. Failure to quiesce all I/O operations might result in failed I/O operations
that are reported to your host operating systems.
You do not need to quiesce all I/O operations if you are shutting down only one SVC node.
Begin the process of quiescing all I/O activity to the system by stopping the applications on
your hosts that are using the volumes that are provided by the system.
If you are unsure which hosts are using the volumes that are provided by the SVC system,
follow the procedure that is described in 9.6.21, “Showing the host to which the volume is
mapped” on page 540, and repeat this procedure for all volumes.
From the System status panel, complete the following steps to shut down your system:
1. Click Actions, as shown in Figure 10-300 on page 809. Select Power Off.
Figure 10-300 Action panel to power off the system
3. Enter the code in the window to confirm the system shutdown (Figure 10-302 on
page 810). Ensure that you stopped all FlashCopy mappings, remote copy relationships,
data migration operations, and forced deletions before you continue.
Figure 10-302 Shutting down the system confirmation window
You completed the required tasks to shut down the system. You can now shut down the
uninterruptible power supply units by pressing the power buttons on their front panels. The
internal batteries of the 2145-DH8 nodes will shut down automatically with the nodes.
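The shutdown can also be issued from the CLI. The following lines are a sketch that assumes the 7.x command name (earlier releases used stopcluster); the -node parameter, if supported at your code level, shuts down a single node instead of the whole system:
svctask stopsystem
svctask stopsystem -node node2
As with the GUI, quiesce host I/O before you run these commands.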
3. A confirmation window opens to ensure that you want to power off the control node, as
shown in Figure 10-304 on page 811.
Figure 10-304 Confirmation window to power down a node
4. Click Yes.
Tip: When you shut down the system, it does not automatically start. You must manually
start the SVC nodes. If the system shuts down because the uninterruptible power supply
units or batteries detected a loss of power, it automatically restarts when the uninterruptible
power supply units or batteries detect that the power was restored (and the batteries have
sufficient power to survive another immediate power failure).
Restarting the SVC system: To start the SVC system, you must first start the
uninterruptible power supply units by pressing the power buttons on their front panels. No
action is necessary for 2145-DH8 nodes. After they are on, go to the service panel of one
of the nodes within your SVC clustered system and press the power-on button and release
it quickly. After the node is fully booted (for example, displaying Cluster: on line 1 and the
system name on line 2 of the SVC front panel), you can start the other nodes in the same
way. When all of the nodes are fully booted and you reestablish administrative contact by
using the GUI, your system is fully operational again.
Figure 10-305 Action tab to update system software
3. Follow the process that is described in 10.18.2, “SAN Volume Controller upgrade test
utility” on page 859.
Figure 10-306 Update drive software
To update the internal drives, select Pools → Internal Storage in the management GUI.
To update specific drives, select the drive or drives and select Actions → Update. Click
Browse and select the directory where you downloaded the firmware update file. Click
Upgrade. Depending on the number of drives and the size of the system, drive updates
can take up to 10 hours to complete.
6. To monitor the progress of the update, click the Running Tasks icon on the bottom center
of the management GUI window and then click Drive Update Operations. You can also
use the Monitoring → Events panel to view any completion or error messages that relate
to the update.
10.13 Managing I/O Groups
In SVC terminology, an I/O Group is a pair of SVC nodes that are combined in a
clustered system. The nodes in an I/O Group must consist of similar hardware; however,
different I/O Groups of the same system can be built from different node models, as
illustrated in Figure 10-308.
In our lab environment, io_grp0 is built from the 2145-DH8 nodes and io_grp1 consists of a
previous model 2145-CF8. This configuration is typical when you are upgrading your data
center storage virtualization infrastructure to a newer SVC platform.
To see the I/O Group details, move the mouse pointer over Actions and click Properties. The
Properties are shown in Figure 10-309. Alternatively, hover the mouse pointer over the I/O
Group name and right-click to open a menu and navigate to Properties.
– I/O Groups
– Topology
– Control enclosures
– Expansion enclosures
– Internal capacity
3. The following tasks are available for this node (Figure 10-311).
The following tasks are shown:
– Rename
– Modify Site
– Identify
– Power Off
– Remove
– View → Fibre Channel Ports
– Show Dependent Volumes
– Properties
4. To view node hardware properties, move the mouse over the hardware parts of the node
(Figure 10-313). You must “turn” or rotate the machine in the GUI by clicking the Rotate
arrow with the mouse, as shown in Figure 10-312.
5. The System window (Figure 10-313) shows how to obtain additional information about
certain hardware parts.
6. Right-click the FC adapter to open the Properties view (Figure 10-314).
Figure 10-316 Available I/O Group on the System panel
2. Right-click the empty gray frame and the Action panel for the system opens, as shown in
Figure 10-317. Click Add Nodes.
3. Alternatively, you can left-click directly in the gray frame of the I/O Group to display the Click to Add option (Figure 10-318 on page 819).
Figure 10-318 Left-click the gray I/O Group to display the Click to Add option
4. In the Add Nodes window (Figure 10-319), you see the two available nodes, which are in
candidate mode and able to join the cluster.
5. Click Add. The system adds one node after the other. You can check this action if you
hover with the mouse pointer over the new I/O Group. See Figure 10-320.
Important: When a node is added to a system, it displays a state of “Adding” and a yellow
warning triangle with an exclamation point. The process to add a node to the system can
take up to 30 minutes, particularly if the software version of the node changes. The added
nodes are updated to the code version of the running cluster.
10.14.4 Removing a node from the SAN Volume Controller clustered system
From the System panel, complete the following steps to remove a node:
1. Select a node and right-click it, as shown in Figure 10-321. Select Remove.
Figure 10-321 Remove a node from the SVC clustered system action
By default, the cache is flushed before the node is deleted to prevent data loss if a failure
occurs on the other node in the I/O Group.
In certain circumstances, such as when the system is degraded, you can take the
specified node offline immediately without flushing the cache or ensuring that data loss
does not occur. Select Bypass check for volumes that will go offline, and remove the
node immediately without flushing its cache.
3. Click Yes to confirm the removal of the node. See the System Details panel to verify a
node removal, as shown in Figure 10-323.
Figure 10-323 System Details panel with one SVC node removed
If this node is the last node in the system, the warning message differs, as shown in
Figure 10-325 on page 822. Before you delete the last node in the system, ensure that you
want to destroy the system. The user interface and any open CLI sessions are lost.
Figure 10-325 Warning window for the removal of the last node in the cluster
After you click OK, the node is a candidate to be added back into this system or into
another system.
10.15 Troubleshooting
The events that are detected by the system are saved in a system event log. When an entry is
made in this event log, the condition is analyzed and classified to help you diagnose
problems.
To access this panel from the SVC System panel, move the mouse pointer over Monitoring in the dynamic menu and select Events.
The list of system events opens with the highest-priority event indicated and information
about how long ago the event occurred. Click Close to return to the Recommended Actions
panel.
Note: If an event is reported, you must select the event and run a fix procedure.
Tip: You can also click Run Fix at the top of the panel (Figure 10-327 on page 823) to
solve the most critical event.
Tip: You can also access the Run Fix Procedure action by right-clicking an event.
3. The Directed Maintenance Procedure window opens, as shown in Figure 10-328. Follow
the steps in the wizard to fix the event.
Sequence of steps: We do not describe all of the possible steps here because the
steps that are involved depend on the specific event. The process is always interactive
and you are guided through the entire process.
10.15.2 Event log
In the Events panel (Figure 10-329), you can choose to display the SVC event log by
Recommended Actions, Unfixed Messages and Alerts, or Show All events.
To access this panel from the SVC System panel that is shown in Figure 10-1 on page 656,
move the mouse pointer over the Monitoring selection in the dynamic menu and click Events.
Then, in the upper-left corner of the panel, select Recommended actions, Unfixed messages
and alerts, or Show all.
Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem.
Other alerts also require an action, but they do not have a fix procedure. Messages are fixed
when you acknowledge reading them.
Filtering events
You can filter events in various ways. Filtering can be based on event status, as described in
“Basic filtering”, or over a period, as described in “Time filtering” on page 825. You can also
search the event log for a specific text string by using table filtering, as described in “Overview
window” on page 661.
Certain events require a specific number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are beneath the coalesce threshold and are transient.
You can also sort events by time or error code. When you sort by error code, the most serious
events (those events with the lowest numbers) are displayed first.
Basic filtering
You can filter the Event log display in one of the following ways by using the drop-down menu
in the upper-left corner of the panel (Figure 10-330 on page 825):
Display all unfixed alerts and messages: Select Recommended Actions to show all events
that require your attention.
Display all alerts and messages: Select Unfixed Messages and Alerts.
Display all event alerts, messages, monitoring, and expired events: Select Show All, which
includes the events that are under the threshold.
Figure 10-330 Filter Event Log display
Time filtering
You can use the following methods to perform time filtering:
Select a start date and time, and an end date and time frame filter. Complete the following
steps to use this method:
a. Click Actions → Filter by Date, as shown in Figure 10-331.
Tip: You can also access the Filter by Date action by right-clicking an event.
b. The Date/Time Filter window opens, as shown in Figure 10-332. From this window,
select a start date and time and an end date and time.
c. Click Filter and Close. Your panel is now filtered based on the time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-333 on page 826.
Figure 10-333 Reset Date Filter action
Select an event and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an event.
b. Click Actions → Show entries within. Select minutes, hours, or days, and select a
value, as shown in Figure 10-334.
Figure 10-334 Show entries within a certain amount of time after this event
Tip: You can also access the Show entries within action by right-clicking an event.
c. Now, your window is filtered based on the time frame, as shown in Figure 10-335.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-336 on page 827.
Figure 10-336 Reset Date Filter action
Tip: To select multiple events, hold down Ctrl and click the entries that you want to
select.
Tip: You can also access the Mark as Fixed action by right-clicking an event.
3. The Warning window that is shown in Figure 10-338 opens. Click Yes.
Exporting event log entries
You can export event log entries to a comma-separated values (CSV) file for further
processing and enhanced filtering with external applications. You can export a full event log or
a filtered result that is based on your requirements. To export an event log entry, complete the
following steps:
1. From the Events panel, show and sort or filter the table to provide the results that you want
to export into a CSV file.
2. Click the diskette (save) icon and save the file to your workstation, as shown in Figure 10-339.
3. You can view the file by using Notepad or another program, as shown in Figure 10-340.
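The exported CSV file can then be post-processed with any scripting language. The following minimal Python sketch is one way to filter the export; the column names (Error Code, Status, Description) are assumptions, so check the header line of your own export and adjust them:

import csv

EXPORT_FILE = "svc_event_log.csv"   # hypothetical name that you chose when saving

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Column names are assumptions; adjust them to match your export.
        if row.get("Status", "").lower() == "alert":
            print(row.get("Error Code", "n/a"), row.get("Description", ""))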
Figure 10-341 Clear log
2. The Warning window that is shown in Figure 10-342 opens. From this window, you must
confirm that you want to clear all entries from the error log.
3. Click Yes.
Figure 10-343 Support panel
The Download Support Package window opens, as shown in Figure 10-345 on page 831.
Figure 10-345 Download Support Package window
The duration varies: Depending on your choice, this action can take several minutes
to complete.
From this window, select the following types of logs that you want to download:
– Standard logs
These logs contain the most recent logs that were collected for the system. These logs
are the most commonly used by support to diagnose and solve problems.
– Standard logs plus one existing statesave
These logs contain the standard logs for the system and the most recent statesaves
from any of the nodes in the system. Statesaves are also known as dumps or
livedumps.
– Standard logs plus most recent statesave from each node
These logs contain the standard logs for the system and the most recent statesaves
from each node in the system. Statesaves are also known as dumps or livedumps.
– Standard logs plus new statesaves
These logs generate new statesaves (livedumps) for all the nodes in the system and
package the statesaves with the most recent logs.
2. Click Download, as shown in Figure 10-345.
3. Select where you want to save the logs, as shown in Figure 10-346 on page 832.
Figure 10-346 Save the log file on your personal workstation
2. In the detailed view, select the node from which you want to download the logs by using
the drop-down menu that is in the upper-left corner of the panel, as shown in
Figure 10-348.
3. Select the package or packages that you want to download, as shown in Figure 10-349 on
page 833.
Figure 10-349 Selecting individual packages
Tip: To select multiple packages, hold down Ctrl and click the entries that you want to
include.
Tip: You can also delete packages by clicking Delete in the Actions menu.
Maximum logging level: The maximum logging level can have a significant effect on the
performance of the CIMOM interface.
To change the CIMOM logging level to high, medium, or low, use the drop-down menu in the
upper-right corner of the panel, as shown in Figure 10-351.
Each user account has a name, role, and password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI. Starting with version
6.3, you can access the CLI with a password and no SSH key.
Note: Use the default superuser account only for initial configuration and emergency access. Change its default password (passw0rd) and always define individual accounts for your users.
The role-based security feature organizes the SVC administrative functions into groups,
which are known as roles, so that permissions to run the various functions can be granted
differently to the separate administrative users. Table 10-1 on page 835 lists the four major
roles and one special role.
Table 10-1 Authority roles

Role: Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
User: For users that control all copy functionality of the cluster

Role: Service
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: For users that perform service maintenance and other hardware tasks on the cluster

Role: Monitor
Allowed commands: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, and the svcconfig command: backup
User: For users that need view access only
The superuser user is a built-in account that has the Security Admin user role permissions.
You cannot change permissions or delete this superuser account; you can only change the
password. You can also change this password manually on the front panels of the clustered
system nodes.
An audit log tracks actions that are issued through the management GUI or CLI. For more
information, see 10.16.9, “Audit log information” on page 845.
10.16.1 Creating a user
Complete the following steps to create a user:
1. From the SVC System panel, move the mouse pointer over the Access selection in the
dynamic menu and click Users.
2. On the Users panel, click Create User, as shown in Figure 10-353.
User name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The user name can be 1 - 256 characters.
The following types of authentication are available in the Authentication Mode section:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 835) to which you want this user to belong.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported Lightweight Directory
Access Protocol (LDAP) service. Ensure that the remote authentication service is
supported by the SVC clustered system. For more information about remote user
authentication, see 2.9, “User authentication” on page 49.
The following types of local credentials can be configured in the Local Credentials section,
depending on your needs:
– Password authentication
The password authenticates users to the management GUI. Enter the password in the
Password field. Verify the password.
Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.
Tip: You can also change user properties by right-clicking a user and selecting
Properties from the list.
Figure 10-355 User Properties action
5. From the User Properties window, you can change the authentication mode and the local
credentials. For the authentication mode, choose the following type of authentication:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 835) of which you want the user to be part.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that
the remote authentication service is supported by the SVC clustered system.
For the local credentials, the following types of local credentials can be configured in this
section, depending on your needs:
– Password authentication: The password authenticates users to the management GUI.
You must enter the password in the Password field. Verify the password.
Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.
– SSH public/private key authentication: The SSH key authenticates users to the CLI.
Use Browse to locate and upload the SSH public key.
6. To confirm the changes, click OK (Figure 10-356 on page 838).
Important: To remove the password for a specific user, the SSH public key must be
defined. Otherwise, this action is not available.
Tip: You can also remove the password by right-clicking a user and selecting Remove
Password.
4. The Warning window that is shown in Figure 10-358 on page 840 opens. Click Yes.
Figure 10-358 Warning window
Important: To remove the SSH public key for a specific user, the password must be
defined. Otherwise, this action is not available.
Tip: You can also remove the SSH public key by right-clicking a user and selecting
Remove SSH Key.
4. The Warning window that is shown in Figure 10-360 opens. Click Yes.
10.16.5 Deleting a user
Complete the following steps to delete a user:
1. From the SVC System panel, move the mouse pointer over the Access selection, and then
click Users.
2. Select the user.
Important: To select multiple users to delete, hold down Ctrl and click the entries that
you want to delete.
Tip: You can also delete a user by right-clicking the user and selecting Delete.
Figure 10-362 Create User Group
Group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The group name can be 1 - 63 characters.
Figure 10-364 Verify user group creation
4. The User Group Properties window opens (Figure 10-366 on page 844).
Figure 10-366 User Group Properties window
From this window, you can change the role. You must select one of the following roles: Monitor, Copy Operator, Service, Administrator, or Security Administrator. For more information about these roles, see Table 10-1 on page 835.
5. To confirm the changes, click OK, as shown in Figure 10-366.
Figure 10-367 Delete User Group action
– If you have users in this group, the Delete User Group window opens, as shown in
Figure 10-369. The users of this group are moved to the Monitor user group.
To view the audit log, from the SVC System panel, move the pointer over the Access selection
on the dynamic menu and click Audit Log, as shown in Figure 10-370.
Time filtering
The following methods are available to perform time filtering on the audit log:
Select a start date and time and an end date and time.
To use this time frame filter, complete the following steps:
a. Click Actions → Filter by Date, as shown in Figure 10-371.
Tip: You can also access the Filter by Date action by right-clicking an entry.
b. The Date/Time Filter window opens (Figure 10-372). From this window, select a start
date and time and an end date and time.
c. Click Filter and Close. Your audit log panel is now filtered based on its time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-373.
Select an entry and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an entry.
b. Click Actions → Show entries within. Select minutes, hours, or days. Then, select a
value, as shown in Figure 10-374.
Tip: You can also access the Show entries within action by right-clicking an entry.
10.17 Configuration
In this section, we describe how to configure various properties of the SVC system.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
To change the system name, click the system name and specify the new name.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.
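If you script configuration changes, you can check a candidate name against this rule before you apply it. The following Python sketch only illustrates the rule as stated here (the similar user and group name rules in 10.16 differ only in maximum length); it is not an official validator:

import re

# Rule from this section: letters A - Z and a - z, digits 0 - 9, underscore; 1 - 63 characters.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{1,63}$")

def is_valid_system_name(name: str) -> bool:
    return bool(NAME_PATTERN.match(name))

print(is_valid_system_name("ITSO_SVC3"))   # True
print(is_valid_system_name("bad name!"))   # False: contains a space and '!'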
Each node has a unique iSCSI name that is associated with two IP addresses. After the host starts the iSCSI connection to a target node, this iSCSI qualified name (IQN) of the target node is visible in the iSCSI configuration tool on the host.
iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems
use the iSNS server to manage iSCSI targets and for iSCSI discovery.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
The CHAP secret is the authentication method that is used to prevent other iSCSI hosts from using the same connection. You can set the CHAP secret for the whole system under the system properties or for each host definition. The CHAP secret must be identical on the server and in the system or host definition. You can create an iSCSI host definition without using CHAP.
Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.
10.17.5 Email notifications
The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.
4. A wizard opens, as shown in Figure 10-378. In the Email Event Notifications System
Location window, you must first define the system location information (Company name,
Street address, City, State or province, Postal code, and Country or region). Click Next
after you provide this information.
5. In the Contact Details window, you must enter contact information to enable IBM Support
personnel to contact the person in your organization to assist with problem resolution
(Contact name, Email address, Telephone (primary), Telephone (alternate), and Machine
location). Ensure that all contact information is valid and click Next, as shown in
Figure 10-379 on page 851.
Figure 10-379 Define the company contact information
6. In the Email Event Notifications Email Servers window (Figure 10-380), configure at least one email server that is used by your site. Enter a valid IP address and a server port for each server that is added. Ensure that the email servers are valid. Use Ping to verify the accessibility of your email server (a connectivity check sketch follows these steps).
7. As shown in Figure 10-381 on page 852, you can configure email addresses to receive
notifications. We suggest that you configure an email address that belongs to a support
user with the error event notification type that is enabled to notify IBM service personnel if
an error condition occurs on your system. Ensure that all email addresses are valid.
Optionally, enable inventory reporting. To enable inventory reporting, click the rightmost
icon that is shown in Figure 10-381 on page 852. You see Reporting when you hover over
this icon with your mouse.
Figure 10-381 Enable event types
8. The last window displays a summary of your Email Event Notifications wizard. Click
Finish to complete the setup. The wizard is now closed. More information was added to
the panel, as shown on Figure 10-382. You can edit or disable email notification from this
window.
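Before or after you complete the wizard, you can confirm that the SMTP server that you specified in step 6 is reachable and accepts connections. The following Python sketch performs a basic SMTP handshake from a workstation (the host and port values are placeholders for your own settings); it does not prove that the SVC itself can reach the server, but it quickly catches a wrong address or a blocked port:

import smtplib

SMTP_HOST = "10.18.228.100"   # placeholder: IP address that was entered in the wizard
SMTP_PORT = 25                # placeholder: server port that was entered in the wizard

try:
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
        code, banner = server.ehlo()
        print("SMTP server answered EHLO with code", code)
except OSError as exc:
    print("SMTP server is not reachable:", exc)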
You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (Figure 10-383 on page 853):
IP Address
The address for the SNMP server.
Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
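If you are unsure whether your SNMP manager receives the configured notifications, a quick way to verify delivery is to listen on the trap port of the manager and confirm that datagrams arrive from the system. The following Python sketch is only a reachability check, not an SNMP decoder; it assumes the default trap port 162, which usually requires administrator or root privileges to bind:

import socket

TRAP_PORT = 162   # default SNMP trap port; adjust it if you configured another port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", TRAP_PORT))   # listen on all interfaces
print("Waiting for SNMP traps on UDP port", TRAP_PORT)

while True:
    data, (sender, _) = sock.recvfrom(65535)
    # The payload is BER-encoded SNMP; this sketch only confirms that a trap arrived.
    print("Received", len(data), "bytes from", sender)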
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.
You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (Figure 10-384 on
page 854):
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
The syslog messages can be sent in concise message format or expanded message format.
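To test a syslog target before you point the system at it, you can run a small receiver that decodes the priority value of each incoming message. The sketch below assumes RFC 3164-style messages that begin with a <PRI> field, where PRI equals facility * 8 + severity, and uses the default syslog UDP port 514 (binding to it usually requires elevated privileges):

import re
import socket

SYSLOG_PORT = 514
PRI_RE = re.compile(rb"^<(\d{1,3})>")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", SYSLOG_PORT))
print("Listening for syslog messages on UDP port", SYSLOG_PORT)

while True:
    data, (sender, _) = sock.recvfrom(8192)
    match = PRI_RE.match(data)
    if match:
        facility, severity = divmod(int(match.group(1)), 8)   # PRI = facility * 8 + severity
        text = data[match.end():].decode(errors="replace")
        print(sender, "facility", facility, "severity", severity, text)
    else:
        print(sender, data.decode(errors="replace"))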
#NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0
(build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000
Figure 10-386 Set Date and Time window
• If you are using a Network Time Protocol (NTP) server, select Set NTP Server IP
Address and then enter the IP address of the NTP server, as shown in
Figure 10-387.
4. Click Save.
10.17.9 Licensing
Complete the following steps to configure the licensing settings:
1. From the SVC Settings panel, move the pointer over Settings and click System.
2. In the left column, select Licensing, as shown in Figure 10-388.
3. In the Select Your License section, you can choose between the following licensing
options for your SVC system:
– Standard Edition: Select the number of terabytes that are available for your license for
virtualization and for Copy Services functions for this license option.
– Entry Edition: This type of licensing is based on the number of the physical disks that
you are virtualizing and whether you selected to license the FlashCopy function, the
Metro Mirror and Global Mirror function, or both.
4. Set the licensing options for the SVC for the following elements:
– Virtualization Limit
Enter the capacity of the storage that will be virtualized by this system.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Important: The Used capacity for FlashCopy mapping is the sum of all of the
volumes that are the source volumes of a FlashCopy mapping.
Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship; both master volumes and auxiliary volumes are included.
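The two Important notes translate into simple sums over your volume inventory. The following Python sketch, which uses made-up volume names and capacities, shows how the used license capacity would be derived under those rules:

# Hypothetical volumes; capacities are in terabytes.
flashcopy_source_volumes = {"vol_db": 2.0, "vol_app": 1.5}        # FlashCopy source volumes
remote_copy_volumes = {"vol_db_master": 2.0, "vol_db_aux": 2.0}   # MM/GM master and auxiliary volumes

# Used capacity for FlashCopy: sum of all volumes that are FlashCopy mapping sources.
flashcopy_used_tb = sum(flashcopy_source_volumes.values())

# Used capacity for Metro Mirror and Global Mirror: sum of all volumes in a
# relationship; master and auxiliary volumes are both included.
remote_copy_used_tb = sum(remote_copy_volumes.values())

print("FlashCopy used capacity:", flashcopy_used_tb, "TB")
print("Metro/Global Mirror used capacity:", remote_copy_used_tb, "TB")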
Figure 10-389 GUI Preferences window
Figure 10-390 Change the IBM SAN Volume Controller Knowledge Center URL
– Extent Sizes enable you to select the extent size during storage pool creation.
– The Accessibility option enables low graphic mode when the system is connected
through a slower network.
Note: Later, we show the process when you upgrade from 7.4.0.x to 7.4.0.y.
The software upgrade package name ends with four numbers that are separated by dots. For example, a software upgrade package might have the name that is shown in the following example:
IBM_2145_INSTALL_7.4.0.0
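If you script the handling of downloaded packages, the target version can be extracted from such a name with a simple pattern. This sketch relies only on the naming format that is described above:

import re

package_name = "IBM_2145_INSTALL_7.4.0.0"

# Four dot-separated numbers at the end of the package name.
match = re.search(r"(\d+)\.(\d+)\.(\d+)\.(\d+)$", package_name)
if match:
    version = tuple(int(part) for part in match.groups())
    print("Package version:", version)   # (7, 4, 0, 0)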
Important: Before you attempt any SVC code update, read and understand the SVC
concurrent compatibility and code cross-reference matrix. For more information, see the
following website and click Latest IBM SAN Volume Controller code:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
During the upgrade, each node in your SVC clustered system is automatically shut down and
restarted by the upgrade process. Because each node in an I/O Group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to ensure that all I/O
paths between all hosts and SANs work.
If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the SVC node that provides that access is shut down during the
upgrade process. You can check the I/O paths by using SDD datapath query commands.
You can use the svcupgradetest utility to check for known issues that might cause problems
during an SVC software upgrade.
The software upgrade test utility can be downloaded in advance of the upgrade process, or it
can be downloaded and run directly during the software upgrade, as guided by the upgrade
wizard.
You can run the utility multiple times on the same SVC system to perform a readiness check in preparation for a software upgrade. We strongly advise that you run this utility a final time immediately before you apply the SVC upgrade, in case a new release of the utility became available after you originally downloaded it.
The installation and use of this utility is nondisruptive and the utility does not require the
restart of any SVC node; therefore, host I/O is not interrupted. The utility is only installed on
the current configuration node.
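If you prefer to drive this check from a script rather than from the GUI wizard, the utility can also be started from the CLI on the configuration node. The following Python sketch uses the paramiko SSH library; the -v flag (target version) is an assumption that is based on the version prompt that is described later in the wizard, so verify the exact syntax in the readme that is provided with the utility:

import paramiko

SVC_HOST = "10.18.229.81"    # placeholder: cluster management IP address
TARGET_VERSION = "7.4.0.0"   # placeholder: version that you plan to install

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(SVC_HOST, username="admin", password="password")   # or key_filename=...

# Assumed invocation of the upgrade test utility on the configuration node.
stdin, stdout, stderr = client.exec_command("svcupgradetest -v " + TARGET_VERSION)
print(stdout.read().decode())
print(stderr.read().decode())
client.close()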
System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the latest information at this website:
https://ibm.biz/BdE8Pe
This utility is intended to supplement rather than duplicate the existing tests that are
performed by the SVC upgrade procedure (for example, checking for unfixed errors in the
error log).
From the window that is shown in Figure 10-391, you can select the following options:
– Check for updates: Use this option to check, on the IBM website, whether an SVC
software version is available that is newer than the version that you installed on your
SVC. You need an Internet connection to perform this check.
– Launch Upgrade Wizard: Use this option to start the software upgrade process.
3. Click Launch Upgrade Wizard to start the upgrade process. The window that is shown in
Figure 10-392 opens.
From the Upgrade Package window, download the upgrade test utility from the IBM
website. Click Browse to upload it from the local disk. When the upgrade test utility is
uploaded, the window that is shown in Figure 10-393 opens.
4. When you click Next, the upgrade test utility is installed. The window that is shown in
Figure 10-394 opens. Click Close.
5. The window that is shown in Figure 10-395 on page 862 opens. From this window, you
can run your upgrade test utility for the level that you need. Enter the version to which you
want to upgrade and the upgrade test utility checks the system to ensure that the system
is ready for an upgrade to this version.
Figure 10-395 Run upgrade test utility
6. Click Next. The upgrade test utility now runs. You see the suggested actions (if any
actions are needed) or the window that is shown in Figure 10-396.
7. In our case (Figure 10-397), we got one warning (system was not configured to send
emails) and no errors. In the case of an error, you must fix the problem before you can
proceed.
8. Click Next to start the SVC software upload procedure. The Upgrade Package window
that is shown in Figure 10-398 on page 863 opens.
Figure 10-398 Upgrade Package window
9. From the window that is shown in Figure 10-398, either download the SVC upgrade
package directly from the IBM website, or locate and upload the software upgrade
package from your disk.
10.Click Next, and you see the windows that are shown in Figure 10-399 and Figure 10-400.
11.Click Next and you see the window that is shown in Figure 10-401 on page 864. The automatic and manual update methods are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system might occur. After all the nodes
in the system are successfully restarted with the new code level, the new level is
automatically committed.
During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, a possibility exists that performance can be affected but this situation
depends on the environment. If any restrictions apply to the operations that can be
done during the update, these restrictions are documented on the product website that
you use to download the update packages. During the update procedure, most of the
configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.
After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
12.Select Automatic upgrade, which is fully controlled by the system.
13.When you click Finish, the SVC software upgrade starts. The window that is shown in
Figure 10-402 on page 865 opens.
Figure 10-402 Upgrading a node
14.When you click OK, you complete the process to upgrade the SVC software. Now, you see
the window that is shown in Figure 10-403.
After a few minutes, the window that is shown in Figure 10-405 opens, which shows that
the node was upgraded.
Figure 10-406 One node is ready and the second node is updating
15.The new SVC software version is installed on the remaining node in the system. You can
check the upgrade status, as shown in Figure 10-407.
16.Now, you must commit and confirm that the code is correct and that the upgrade was
successful. Click Confirm Update, as shown in Figure 10-408 on page 867.
Figure 10-408 Confirm the update
After the confirmation process finishes, the SVC software upgrade task is complete.
Figure 10-410 Upgrade Software
From the window that is shown in Figure 10-410, you can only select the following option:
– Update: This option opens the firmware selection window.
Important: The firmware package must already be downloaded. You can obtain the latest firmware at this website:
https://ibm.biz/BdE8Pe
3. When you click Update, the Update System selection panel opens, as shown in
Figure 10-411.
4. If all selected firmware is valid, you see Figure 10-412 on page 869. The test utility,
package information, code level, and Update option change color. Click Update to
proceed.
Figure 10-412 Validated firmware package
5. The selection window opens. Choose either to update the system automatically or
manually. The differences are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system can occur. After all the nodes in
the system are successfully restarted with the new code level, the new level is
automatically committed.
During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, performance might be affected. If any restrictions apply to the operations that
can be done during the update, these restrictions are documented on the product
website that you use to download the update packages. During the update procedure,
most configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.
After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
6. In Figure 10-413, we select Automatic update.
7. When you click Finish, the SVC software upgrade starts. The window that is shown in
Figure 10-414 opens. The system starts with the upload of the test utility and the SVC
system firmware.
Figure 10-414 Uploading the test utility and the SVC system firmware
8. After a while, the system starts automatically to run the update test utility.
Figure 10-415 Running the update test utility
9. When the system detects an issue or an error, you are guided by the GUI. Click Read
more, as shown in Figure 10-416.
Figure 10-416 Issues that are detected by the update test utility
10.The Update Test Utility Results panel opens and describes the results, as shown in
Figure 10-417.
11.In our case, we received a warning because we did not enable email notification. So, we
can click Close and proceed with the update. As shown in Figure 10-418, we click
Resume.
12.Due to the warning about our not setting up email notification, another warning appears,
as shown in Figure 10-419. We proceed and click Yes.
14.When the update for the first node is complete, the system pauses for approximately 30 minutes to ensure that all paths are reestablished to the now updated node (Figure 10-421 on page 873).
Figure 10-421 System paused to reestablish the paths
15.After a few minutes, a node failover happens and closes the current web session. Click Yes to reestablish the web session, as shown in Figure 10-422.
Appendix A. Performance data and statistics gathering
To ensure that your system maintains the performance levels that you want, monitor performance periodically. Periodic monitoring provides visibility into potential problems that exist or are developing so that they can be addressed in a timely manner.
Performance considerations
When you are designing an SVC storage infrastructure or maintaining an existing infrastructure, you must consider many factors in terms of their potential effect on performance. These factors include, but are not limited to, dissimilar workloads that compete for the same resources, overloaded resources, insufficient available resources, poorly performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network
(SAN) and SVC layout:
Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The recommendation is to maintain
a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to lead to I/O
bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on
the edges and the SVC is the core.
Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes
a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In
standard setups, this load can be ignored. Although this area is not entirely negligible, it
does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available with version 6.3. When the system is running in this manner, the number of ISL
links becomes more important. As with the storage-to-SVC ISL oversubscription, this load
also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning
when you determine the number of ISLs to implement. If you need assistance, we
recommend that you contact your IBM representative and request technical assistance.
ISL trunking/port channeling
For the best performance and availability, we highly recommend that you use ISL trunking
or port channeling. Independent ISL links can easily become overloaded and turn into
performance bottlenecks. Bonded or trunked ISLs automatically share load and provide
better redundancy in a failure.
Number of paths per host multipath device
The maximum supported number of paths per multipath device that is visible on the host is
eight. Although the Subsystem Device Driver Path Control Module (SDDPCM), related
products, and most vendor multipathing software can support more paths, the SVC
expects a maximum of eight paths. In general, using more than eight paths provides no benefit and only affects performance. Although the SVC can work with more than eight paths, this design is technically unsupported.
Do not intermix dissimilar array types or sizes
Although the SVC supports an intermix of differing storage within storage pools, it is best
to always use the same array model, RAID mode, RAID size (RAID 5 6+P+S does not mix
well with RAID 6 14+2), and drive speeds.
Rules and guidelines are no substitute for monitoring performance. Monitoring performance can provide validation that design expectations are met and can identify opportunities for improvement.
Performance scales nearly linearly when nodes are added to the cluster until performance eventually becomes limited by the attached components. Also, although virtualization with the
SVC provides significant flexibility in terms of the components that are used, it does not
diminish the necessity of designing the system around the components so that it can deliver
the level of performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the end goal is that you always want to maximize the bandwidth that is
available to the SVC ports. The SVC is one of the few devices that can drive ports to their
limits on average, so it is imperative that you put significant thought into planning the SAN
layout.
Essentially, the SVC performance improvements are gained by spreading the workload across a greater number of back-end resources and by the additional caching that is provided by the SVC cluster. However, the performance of individual resources eventually becomes the limiting factor.
The statistics files (VDisk, MDisk, and Node) are saved at the end of the sampling interval
and a maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion.
This design provides statistics for the most recent 80-minute period if the default 5-minute
sampling interval is used. The SVC supports user-defined sampling intervals of 1 - 60
minutes.
The maximum space that is required for a performance statistics file is 1,153,482 bytes. Up to
128 (16 per each of the three types across eight nodes) different files can exist across eight
SVC nodes. This design makes the total space requirement a maximum of 147,645,694 bytes
for all performance statistics from all nodes in an SVC cluster.
Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all
nodes in an SVC cluster when you are in time-critical situations. The required size is not
otherwise important because SVC node hardware can map the space.
You can define the sampling interval by using the startstats -interval 2 command to
collect statistics at 2-minute intervals. For more information, see 9.9.7, “Starting statistics
collection” on page 556.
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within the SVC, they shorten the amount of time that the historical
data is available on the SVC. For example, instead of an 80-minute period of data with the
default five-minute interval, if you adjust to 2-minute intervals, you have a 32-minute period
instead.
Since SVC 5.1.0, cluster-level statistics are no longer supported. Instead, use the per node
statistics that are collected. The sampling of the internal performance counters is coordinated
across the cluster so that when a sample is taken, all nodes sample their internal counters at
the same time. It is important to collect all files from all nodes for a complete analysis. Tools,
such as Tivoli Storage Productivity Center, perform this intensive data collection for you.
The node_frontpanel_id is the front panel ID of the node on which the statistics were collected. The date is in
the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an
MDisk statistics file name:
Nm_stats_113986_141031_214932
Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Tip: The performance statistics files can be copied from the SVC nodes to a local drive on
your workstation by using the pscp.exe (included with PuTTY) from an MS-DOS command
line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_SVC3
admin@10.18.229.81:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
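After the files are on your workstation, the naming convention that is described above can be used to sort or group them. The following Python sketch decodes a statistics file name into its type prefix, node front panel ID, and time stamp; it assumes only the format that is shown in this section:

import re
from datetime import datetime

STATS_NAME_RE = re.compile(
    r"^N(?P<type>[a-z])_stats_(?P<panel>[^_]+)_(?P<date>\d{6})_(?P<time>\d{6})$")

def parse_stats_name(name):
    """Split a statistics file name such as Nm_stats_113986_141031_214932."""
    match = STATS_NAME_RE.match(name)
    if not match:
        raise ValueError("not a statistics file name: " + name)
    stamp = datetime.strptime(match["date"] + match["time"], "%y%m%d%H%M%S")
    return match["type"], match["panel"], stamp

print(parse_stats_name("Nm_stats_113986_141031_214932"))
# ('m', '113986', datetime.datetime(2014, 10, 31, 21, 49, 32))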
qperf
qperf is an unofficial (no-charge and unsupported) collection of awk scripts that was written by Christian Karpp and made available for download from IBM Techdocs. qperf is designed to provide a quick performance overview by using the command-line interface (CLI) and a UNIX Korn shell. (It can also be used with Cygwin.)
svcmon
svcmon is no longer available.
The performance statistics files are in .xml format. They can be manipulated by using various
tools and techniques. Figure A-1 on page 880 shows an example of the type of chart that you
can produce by using the SVC performance statistics.
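Because the exact element and attribute names inside the statistics files can vary between code levels, a generic first step is to inspect one file and list what it contains. The following Python sketch uses the standard xml.etree library and assumes nothing about the schema beyond the file being well-formed XML:

import xml.etree.ElementTree as ET
from collections import Counter

STATS_FILE = "Nm_stats_113986_141031_214932"   # placeholder: a file copied from /dumps/iostats

root = ET.parse(STATS_FILE).getroot()

# Count the element types and show the attributes that each one carries.
tags = Counter(elem.tag for elem in root.iter())
for tag, count in tags.most_common():
    sample = next(elem for elem in root.iter(tag))
    print(tag, count, "occurrence(s), attributes:", sorted(sample.attrib))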
Each node collects various performance statistics, mostly at 5-second intervals, and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node. As with system statistics,
node statistics help you to evaluate whether the node is operating within normal performance
metrics.
The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-2 (the output is truncated and shows only part of
the available statistics). You can also specify a node name in the command to limit the output
for a specific node.
In contrast, the lssystemstats command lists the same set of statistics that is listed with the lsnodestats command, but it represents all nodes in the cluster. The values for these statistics are calculated from the node statistics values in the following way (see the aggregation sketch after this list):
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
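A minimal Python sketch of that aggregation (with made-up per-node samples, and using an IOPS-weighted average for latency because the exact cluster-wide latency calculation is not published here) might look as follows:

# Hypothetical per-node samples; units: MBps, ms, IOPS, and percent.
node_stats = [
    {"bandwidth": 420.0, "latency": 1.2, "iops": 35000, "cpu_pc": 38.0},
    {"bandwidth": 390.0, "latency": 1.6, "iops": 31000, "cpu_pc": 42.0},
]

system_stats = {
    # Bandwidth and IOPS are the sums over all nodes.
    "bandwidth": sum(n["bandwidth"] for n in node_stats),
    "iops": sum(n["iops"] for n in node_stats),
    # CPU percentage is the average over all nodes.
    "cpu_pc": sum(n["cpu_pc"] for n in node_stats) / len(node_stats),
    # Latency is calculated from whole-cluster data; here it is weighted by IOPS.
    "latency": sum(n["latency"] * n["iops"] for n in node_stats)
               / sum(n["iops"] for n in node_stats),
}

print(system_stats)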
Table A-1 has a brief description of each of the statistics that are presented by the
lssystemstats and lsnodestats commands.
Field name Unit Description
As shown in Figure A-3 on page 885, the Performance monitoring window is divided into the
following sections that provide utilization views for the following resources:
CPU Utilization: Shows the overall CPU usage percentage.
Volumes: Shows the overall volume utilization with the following fields:
– Read
– Write
– Read latency
– Write latency
Interfaces: Shows the overall statistics for each of the available interfaces:
– Fibre Channel
– iSCSI
– SAS
– IP Remote Copy
MDisks: Shows the following overall statistics for the MDisks:
– Read
– Write
– Read latency
– Write latency
Figure A-3 Performance monitoring window
You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-4.
You can also change the metric between MBps or IOPS, as shown in Figure A-5.
On any of these views, you can select any point with your cursor to see the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line with the various values that were gathered, as shown in Figure A-6 on page 886.
For each of the resources, various values exist that you can view by selecting the value. For
example, as shown in Figure A-7, the four available fields are selected for the MDisks view:
Read, Write, Read latency, and Write latency. In our example, Read is not selected.
Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, the use of .xml files is a
less practical and more complicated method to analyze the SVC performance statistics. Tivoli
Storage Productivity Center for Disk is the supported IBM tool to collect and analyze SVC
performance statistics.
Tivoli Storage Productivity Center for Disk is installed separately on a dedicated system and it
is not part of the SVC software bundle.
For more information about the use of Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
SVC port quality statistics: Tivoli Storage Productivity Center for Disk Version 4.2.1
supports the SVC port quality statistics that are provided in SVC versions 4.3 and later.
Monitoring these metrics and the performance metrics can help you to maintain a stable
SAN environment.
Appendix B. Terminology
In this appendix, we define the IBM System Storage SAN Volume Controller (SVC) terms that
are commonly used in this book.
To see the complete set of terms that relate to the SAN Volume Controller, see the Glossary
section of the IBM SAN Volume Controller Knowledge Center, which is available at this
website:
http://www.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html
Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 900.
Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 900.
Back end
See “Front end and back end” on page 894.
Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
Canister
A canister is a single processing unit within a storage system.
Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are FlashCopy, Metro Mirror, Global Mirror, and virtualization. See also
“FlashCopy” on page 893, “Metro Mirror” on page 896, and “Virtualization” on page 900.
Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.
Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Instead of being created directly from managed disks (MDisks), child pools
are created from existing capacity that is allocated to a parent pool. As with parent pools,
volumes can be created that specifically use the capacity that is allocated to the child pool.
Child pools have properties that are similar to those of parent pools and can be used for
volume copy operations. See also “Parent pool” on page 897.
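As an illustration only, a child pool might be created and used from the CLI as in the following sketch (the pool and volume names are hypothetical, and the exact parameters can vary by code level):

# Carve a 500 GiB child pool out of an existing parent pool
mkmdiskgrp -parentmdiskgrp Pool0 -size 500 -unit gb -name Pool0_child
# Create a volume whose capacity is taken only from the child pool
mkvdisk -mdiskgrp Pool0_child -iogrp 0 -size 100 -unit gb -name app_vol01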
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a Flash disk. A cold extent also refers to an extent that needs
to be migrated onto an HDD if it is on a Flash disk drive.
Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.
Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.
Container
A container is a software object that holds or organizes other software objects or entities.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.
Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to guarantee the recoverability of applications.
Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.
Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the SVC cluster likely have
different performance attributes because of the type of disk or RAID array on which they are
installed. The MDisks can be on 15 K RPM Fibre Channel (FC) or serial-attached SCSI (SAS)
disk, Nearline SAS, or Serial Advanced Technology Attachment (SATA), or even Flash Disks.
Therefore, a storage tier attribute is assigned to each MDisk and the default is generic_hdd.
SVC 6.1 introduced a new disk tier attribute for Flash Disk, which is known as generic_ssd.
Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data
placement of a volume’s extents in a multitiered storage pool. The pool normally contains a
mix of Flash Disks and HDDs. Easy Tier measures host I/O activity on the volume’s extents
and migrates hot extents onto the Flash Disks to ensure the maximum performance.
Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the
volume extents in a pool are “measured” only. No automatic extent migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Before SVC V6.1, this situation was known as an error.
Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to IBM and to provide an entry point into the service guide.
Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
SVC. An event ID is used internally in the cluster to identify the error.
Excluded condition
The excluded condition is a status condition. It describes an MDisk that the SVC decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The extent size can range from 16 MB to 8 GB.
External storage
External storage refers to managed disks (MDisks) that are SCSI logical units that are
presented by storage systems that are attached to and managed by the clustered system.
Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.
Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
the event of a software, hardware, or network interruption. See also “Failback”.
Field-replaceable units
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.
FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is created. The
target volume maintains the contents of the volume at the point in time when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.
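As a simple illustration (the volume and mapping names are hypothetical, and parameters can vary by code level), a FlashCopy mapping is typically created, prepared, and then triggered from the CLI:

# Define a FlashCopy mapping from a source volume to a target volume of the same size
mkfcmap -source db_vol -target db_vol_copy -copyrate 50 -name db_fcmap
# Flush the source volume cache, then trigger the point-in-time copy
prestartfcmap db_fcmap
startfcmap db_fcmap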
FlashCopy relationship
See “FlashCopy mapping” on page 894.
FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 897.
Global Mirror
Global Mirror is a method of asynchronous replication that maintains data consistency across
multiple volumes within or across multiple systems. Global Mirror is generally used where
distances between the source site and target site cause increased latency beyond what the
application can accept.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KiB or
256 KiB) in the SVC. A grain is also the unit to extend the real size of a thin-provisioned
volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or Internet Small
Computer System Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI IDs
are mapped to volumes separately. The intent is to have a one-to-one relationship between
hosts and host IDs, although this relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. (Host mapping is equivalent to LUN masking.) Before SVC V6.1, this
process was known as VDisk-to-host mapping.
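The following sketch shows host mapping from the CLI (the host name, WWPNs, and volume name are hypothetical, and the exact parameters can vary by code level):

# Create a host object for two Fibre Channel ports
mkhost -name winhost01 -fcwwpn 210000E08B05ADFC:210100E08B25ADFC
# Map a volume to that host (equivalent to LUN masking)
mkvdiskhostmap -host winhost01 app_vol01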
Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a Flash Disk.
Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume.
Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
managed disk (MDisk) to the volume.
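As an illustration of image mode (the pool, MDisk, and volume names are hypothetical, and the MDisk must be unmanaged before the import), an existing LUN can be brought under SVC control without changing its data:

# Import an existing LUN as an image-mode volume with a block-for-block translation
mkvdisk -mdiskgrp ImagePool -iogrp 0 -vtype image -mdisk mdisk10 -name legacy_vol01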
I/O Group
Each pair of SVC cluster nodes is known as an input/output (I/O) Group. An I/O Group has a
set of volumes that are associated with it that are presented to host systems. Each SVC node
is associated with exactly one I/O Group. The nodes in an I/O Group provide a failover and
failback function for each other.
Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
enclosures and in nodes that are part of the SVC cluster.
Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.
Metro Mirror
Metro Mirror is a method of synchronous replication that maintains data consistency across
multiple volumes within the system. Metro Mirror is generally used when the write latency that
is caused by the distance between the source site and target site is acceptable to application
performance.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the SVC as copy 0 and the secondary copy is known within the
SVC as copy 1.
Node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for
the cluster. The SVC nodes are deployed in pairs that are called I/O Groups. One node in a
clustered system is designated as the configuration node.
Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and serial-attached SCSI (SAS) expansion ports.
Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes.
Point-in-time copy
A point-in-time copy is the instantaneous copy that the FlashCopy service makes of the
source volume. See also “FlashCopy service” on page 894.
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.
Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.
Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage system
attachment, and remote copy operations. This SAN is referred to as a public SAN. You can
configure the public SAN to allow SVC node-to-node communication also. You can optionally
use the -localportfcmask parameter of the chsystem command to constrain the node-to-node
communication to use only the private SAN.
Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.
Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and finally
the last disk (index 2). The tie is broken by the node that locks them first.
RACE engine
The RACE engine compresses data on volumes in real time with minimal impact on
performance. See “Compression” on page 891 or “Real-time Compression” on page 898.
Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency.
The RACE engine compresses data on volumes in real time with minimal impact on
performance.
RAID 0
RAID 0 is a data striping technique that is used across an array and no data protection is
provided.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore, two identical
copies of striped data exist; no parity exists.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks in the presence of two concurrent disk failures.
Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a master volume
and an auxiliary volume. These volumes also have the attributes of a primary or secondary
volume.
Reliability, availability, and serviceability
Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together.
Significant distances can exist between the components in the local cluster and those
components in the remote cluster.
Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a volume.
Space efficient
See “Thin provisioning” on page 900.
Stretched system
A stretched system is an extended high availability (HA) method that is supported by SVC to
enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system. One half of the system and I/O Group is usually
in a geographically distant location from the other, often 10 kilometers (6.2 miles) or more. A
third site is required to host a storage system that provides a quorum disk.
Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a Redundant Array of Independent Disks (RAID), is split into smaller chunks of storage
known as extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 890.
Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 890.
Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.
Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity. Before SVC V6.1, this thin-provisioned volume was known as
space efficient.
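A minimal CLI sketch of creating a thin-provisioned volume follows (names are hypothetical, and parameters can vary by code level):

# Create a volume with 1 TiB virtual capacity, 10% initially allocated real capacity, and automatic expansion
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 1 -unit tb -rsize 10% -autoexpand -grainsize 256 -name thin_vol01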
T10 DIF
T10 DIF is a “Data Integrity Field” extension to SCSI to allow for end-to-end protection of data
from host application to physical media.
Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created that
contains several storage systems. Storage systems from various vendors can be used. The
pool can be split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 890.
Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied to it by a
virtualization engine.
Volume
A volume is an SVC logical device that appears to host systems that are attached to the SAN
as a SCSI disk. Each volume is associated with exactly one I/O Group. A volume has a
preferred node within the I/O Group. Before SVC 6.1, this volume was known as a VDisk or
virtual disk.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.
Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or remote-copy relationship. In these cases, the
system fails to delete the volume, unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active. Active means
that the system detected recent I/O activity to the volume from any host.
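The following sketch illustrates volume protection from the CLI. The parameters that are shown are an assumption that is based on the description above, and the exact syntax can vary by code level; the volume name is hypothetical:

# Prevent deletion of volumes or host mappings that had I/O within the last 60 minutes
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60
# A forced deletion overrides the check and can delete a volume that is still active; use it with care
rmvdisk -force app_vol01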
Write-through mode
Write-through mode is a process in which data is written to a storage device at the same time
that the data is cached.
Appendix C. Stretched Cluster
We do not provide technical details or implementation guidelines in this appendix. For more
information, see IBM SAN Volume Controller Enhanced Stretched Cluster with VMware,
SG24-8211.
For more information about Enhanced Stretched Cluster prerequisites, see this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
Detailed guidance about how to configure the SVC in a stretched cluster configuration, and
about its integration with the VMware environment, is provided in IBM SAN Volume Controller
Enhanced Stretched Cluster with VMware, SG24-8211.
The implementation scenarios of the SVC stretched cluster in an AIX virtualized or clustered
environment are available in IBM SAN Volume Controller Stretched Cluster with PowerVM
and PowerHA, SG24-8142.
With SVC 6.3, IBM offers significant enhancements for a Split I/O Group in one of the
following configurations:
No inter-switch link (ISL) configuration:
– Passive Wavelength Division Multiplexing (WDM) devices can be used between both
sites.
– No ISLs can be used between the SVC nodes (similar to the SVC 5.1 supported
configuration).
– The distance can be extended up to 40 km (24.8 miles).
ISL configuration:
– ISLs are allowed between the SVC nodes (not allowed with releases earlier than 6.3).
– The maximum distance is similar to Metro Mirror (MM) distances.
– The physical requirements are similar to MM requirements.
– ISL distance extension is allowed with active and passive WDM devices.
SVC 7.4 further enhances the Enhanced Stretched Cluster options with the automatic
selection of quorum disks and the placement of one quorum disk in each of the three sites.
Users can still manually select the quorum disks in each of the three sites if they prefer.
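A minimal CLI sketch for reviewing and overriding the quorum disk selection follows (the MDisk ID is hypothetical, and the exact syntax can vary by code level):

# List the three quorum disk candidates and show which one is currently active
lsquorum
# Manually assign quorum index 2 to a specific MDisk at the third site
chquorum -mdisk 5 2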
Non-ISL configuration
In a non-ISL configuration, each IBM SVC I/O Group consists of two independent SVC nodes.
In contrast to a standard SVC environment, nodes from the same I/O Group are not placed
close together; instead, they are distributed across two sites. If a node fails, the other node in
the same I/O Group takes over the workload, which is standard in an SVC environment.
Volume mirroring provides a consistent data copy in both sites. If one storage subsystem fails,
the remaining subsystem processes the I/O requests. The combination of SVC node
distribution in two independent data centers and a copy of data in two independent data
centers creates a new level of availability, the stretched cluster.
All SVC nodes and the storage system in a single site might fail; the other SVC nodes take
over the server load by using the remaining storage systems. The volume ID, behavior, and
assignment to the server are still the same. No server reboot, no failover scripts, and
therefore no script maintenance are required.
However, you must consider that a stretched cluster typically requires a specific setup and
might exhibit substantially reduced performance. In a stretched cluster environment, the SVC
nodes from the same I/O Group are in two sites. A third quorum location is required for
handling “split brain” scenarios.
Figure C-1 on page 905 shows an example of a non-ISL stretched cluster configuration as it
is supported in SVC V5.1.
Figure C-1 Standard SVC environment that uses volume mirroring
The stretched cluster uses the SVC volume mirroring functionality. Volume mirroring allows
the creation of one volume with two copies of its MDisk extents; it is a single volume with two
copies of the data, not two separate volumes that hold the same data. The two data copies
can be in different MDisk groups. Therefore, volume mirroring can minimize the effect on
volume availability if one set of MDisks goes offline. The
resynchronization between both copies after recovering from a failure is incremental; the SVC
starts the resynchronization process automatically.
As with a standard volume, each mirrored volume is owned by one I/O Group with a preferred
node. Therefore, the mirrored volume goes offline if the whole I/O Group goes offline. The
preferred node performs all I/O operations, which means both reads and writes. The preferred
node can be set manually.
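As an illustration of how volume mirroring is set up for a stretched configuration (the pool and volume names are hypothetical), a second copy is added in the storage pool at the other site and its synchronization is then monitored:

# Add a second copy of an existing volume in the MDisk group at the other site
addvdiskcopy -mdiskgrp Site2_Pool app_vol01
# Check the synchronization progress of the new copy
lsvdisksyncprogress app_vol01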
The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in
sync) and the definitions of the primary and secondary volume copy are saved there.
Therefore, an active quorum disk is required for volume mirroring. To ensure data
consistency, the SVC disables all mirrored volumes if no access exists to any quorum disk
candidate.
In many cases, no independent third site is available. It is possible to use an existing building
or computer room from the two main sites to create a third, independent failure domain.
As shown in Figure C-1 on page 905, the setup is similar to a standard SVC environment, but
the nodes are distributed to two sites. The GUI representation of a stretched cluster is
illustrated in Figure C-2.
The SVC nodes and data are equally distributed across two separate sites with independent
power sources, which are named as separate failure domains (Failure Domain 1 and Failure
Domain 2). The quorum disk is in a third site with a separate power source (Failure Domain
3).
Each I/O Group requires four dedicated fiber-optic links between site 1 and site 2.
If the non-ISL configuration is implemented over a 10 km (6.2 mile) distance, passive WDM
devices (without power) can be used to pool multiple fiber-optic links with different
wavelengths in one or two connections between both sites. Small form-factor pluggables
(SFPs) with different wavelengths, or “colored SFPs” (that is, SFPs that are used in Coarse
Wavelength Division Multiplexing (CWDM) devices), are required here.
The maximum distance between both major sites is limited to 40 km (24.8 miles).
To prevent the risk of burst traffic (which results from a lack of buffer-to-buffer credits), the
link speed must be limited. The maximum link speed depends on the cable length between
the nodes in the same I/O Group, as shown in Table C-1 on page 907.
Table C-1 SVC code level lengths and speed (columns: SVC code level, minimum length, maximum length, and maximum link speed)
With ISLs between SVC nodes, the maximum distance is similar to Metro Mirror distances
(300 km or 186.4 miles). The physical requirements are similar to Metro Mirror requirements,
with an ISL distance extension for active and passive WDM.
The quorum disk at the third site must be FC-attached. Fibre Channel over IP (FCIP) can be
used if the round-trip delay time to the third site is always less than 80 ms, which is 40 ms in
each direction.
Table C-2 provides an overview of the features that the SVC stretched cluster supports in
each code version. In the following list, the support values run from the earliest code level to
V7.4, left to right:
Non-ISL stretched cluster; separate links between SVC nodes and remote SAN switches; up to 10 km (6.2 miles); passive CWDM and passive dense wavelength division multiplexing (DWDM): Yes, Yes, Yes, Yes, Yes, Yes, Yes
Dynamic quorum disk V2: N/A, Yes, Yes, Yes, Yes, Yes, Yes
Non-ISL stretched cluster up to 40 km (24.8 miles): N/A, Per quote, Yes, Yes, Yes, Yes, Yes
ISL stretched cluster with private and public fabric, up to 300 km (186.4 miles): N/A, N/A, Yes, Yes, Yes, Yes, Yes
Active DWDMs and CWDMs for non-ISL and ISL stretched cluster: N/A, N/A, Yes, Yes, Yes, Yes, Yes
ISL stretched cluster that uses Fibre Channel over Ethernet (FCoE) ports for private fabrics: N/A, N/A, N/A, Yes, Yes, Yes, Yes
Support of eight FC ports per SVC node: N/A, N/A, N/A, N/A, Per quote, Yes, Yes
Enhanced mode (site awareness): N/A, N/A, N/A, N/A, N/A, N/A, Yes
For the best performance, servers in site 1 must access volumes in site 1 (that is, volumes
whose preferred node and primary copy are in site 1). SVC volume mirroring copies the data
to storage 1 and to storage 2. A similar setup must be implemented for the servers in site 2,
which access the SVC node in site 2.
The configuration that is shown in Figure C-3 covers the following failover cases:
Power off FC switch 1: FC switch 2 takes over the load and routes I/O to SVC node 1 and
SVC node 2.
Power off SVC node 1: SVC node 2 takes over the load and serves the volumes to the
server. SVC node 2 changes the cache mode to write-through to avoid data loss in case
SVC node 2 fails, as well.
Power off storage 1: The SVC waits a short time (15 - 30 seconds), pauses volume copies
on storage 1, and continues I/O operations by using the remaining volume copies on
storage 2.
Power off site 1: The server no longer has access to the local switches, which causes a loss
of access. You can optionally avoid this loss of access by providing more fiber-optic links
between site 1 and site 2 for server access.
The same scenarios are valid for site 2 and similar scenarios apply in a mixed failure
environment, for example, the failure of switch 1, SVC node 2, and storage 2. No manual
failover or failback activities are required because the SVC performs the failover or failback
operation.
The use of AIX Live Partition Mobility or VMware vMotion can increase the number of use
cases significantly. Online system migrations are possible, including the migration of running
virtual machines and applications, which makes it much easier to handle maintenance
operations in an appropriate way.
Advantages
A non-ISL configuration includes the following advantages:
The business continuity solution is distributed across two independent data centers.
The configuration is similar to a standard SVC clustered system.
Limited hardware effort: Passive WDM devices can be used, but are not required.
Requirements
A non-ISL configuration includes the following requirements:
Four independent fiber-optic links for each I/O Group between both data centers.
Long-wave SFPs with support over long distance for direct connection to remote SAN.
Optional usage of passive WDM devices:
– A passive WDM device requires no power for operation.
– “Colored SFPs” are required to make the different wavelengths available.
– The “colored SFPs” must be supported by the switch vendor.
Two independent fiber-optic links between site 1 and site 2 are recommended.
Third site for quorum disk placement.
Quorum disk storage system must use FC for attachment with similar requirements, such
as Metro Mirror storage (80 ms round-trip delay time, which is 40 ms in each direction).
When possible, use two independent fiber-optic links between site 1 and 2.
Bandwidth reduction
Buffer credits, which are also called buffer-to-buffer credits, are used as a flow control method
by FC technology and represent the number of frames that a port can store.
These guidelines give the minimum numbers. The performance drops if insufficient buffer
credits exist, according to the link distance and link speed, as shown in Table C-3 on
page 910.
The number of buffer-to-buffer credits that is provided by an SVC FC host bus adapter (HBA)
is limited. An HBA of a 2145-CF8 node provides 41 buffer credits, which are sufficient for a
10 km (6.2 miles) distance at 8 Gbps. The SVC adapters in all earlier models provide only
eight buffer credits, which are enough only for a 4 km (2.4 miles) distance with a 4 Gbps link
speed. These numbers are determined by the hardware of the HBA and cannot be changed.
We suggest that you use 2145-CF8 or CG8 nodes for distances longer than 4 km (2.4 miles)
to provide enough buffer-to-buffer credits at a reasonable FC speed.
The stretched cluster configuration that is shown in Figure C-4 on page 910 supports
distances of up to 300 km (186.4 miles), which is the same as the recommended distance for
Metro Mirror.
Data is written by the preferred node to the local and remote storage. The Small Computer
System Interface (SCSI) write protocol results in two round trips. This latency is hidden from
the application by the write cache.
The stretched cluster is often used to move the workload between servers at separate sites.
VMware vMotion or an equivalent function can be used to move applications between servers;
therefore, applications no longer necessarily issue I/O requests to the local SVC nodes.
SCSI write commands from hosts to remote SVC nodes result in another two round trips’
worth of latency that is visible to the application. For stretched cluster configurations in a
long-distance environment, we advise that you use the local site for host I/O. Certain switches
and distance extenders use extra buffers and proprietary protocols to eliminate one of the
round trip’s worth of latency for SCSI write commands.
These devices are supported for use with the SVC. They do not benefit or affect inter-node
communication; however, they benefit the host to remote SVC I/Os and the SVC to remote
storage controller I/Os.
Requirements
A stretched cluster with ISL configuration must meet the following requirements:
Four independent, extended SAN fabrics are shown in Figure C-4 on page 910. Those
fabrics are named Public SAN1, Public SAN2, Private SAN1, and Private SAN2. Each
Public or Private SAN can be created with a dedicated FC switch or director, or it can
be a virtual SAN in a Cisco or Brocade FC switch or director.
Two ports per SVC node attach to the private SANs.
Two ports per SVC node attach to the public SANs.
SVC volume mirroring exists between site 1 and site 2.
Hosts and storage attach to the public SANs.
The third site quorum disk attaches to the public SANs.
Figure C-5 on page 912 shows the possible configurations with a virtual SAN.
Use a third site to house a quorum disk. Connections to the third site can be through FCIP
because of the distance (no FCIP or FC switches were shown in the previous layouts for
simplicity). In many cases, no independent third site is available.
It is possible to use an existing building from the two main sites to create a third,
independent failure domain, but you have the following considerations:
– The third failure domain needs an independent power supply or uninterruptible power
supply. If the hosting site failed, the third failure domain needs to continue to operate.
– Each site (failure domain) must be placed in a separate fire compartment.
– FC cabling must not go through another site (failure domain). Otherwise, a fire in one
failure domain destroys the links (and breaks the access) to the SVC quorum disk.
Applying these considerations, the SVC clustered system can be protected, although two
failure domains are in the same building. Consider an IBM Advanced Technical Support
(ATS) review or a request for price quotation (RPQ)/Solution for Compliance in a Regulated
Environment (SCORE) process to review the proposed configuration.
The storage system that provides the quorum disk at the third site must support extended
quorum disks. Storage systems that provide extended quorum support are available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003907
Four active/passive WDMs, two for each site, are needed to extend the public and private
SAN over a distance.
Place independent storage systems at the primary and secondary sites. Use volume
mirroring to mirror the host data between storage systems at the two sites.
The SVC nodes that are in the same I/O Group must be in two remote sites.
More information
For more information about the SVC stretched cluster and Enhanced Stretched Cluster,
including planning, implementation, configuration steps, and troubleshooting, see the
following resources:
IBM SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211
IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142
IBM System Storage SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521
IBM SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
Related publications

The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in
this document. Note that some publications that are referenced in this list might be available
in softcopy only.
IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229
Implementing the IBM Storwize V7000 Gen2, SG24-8244
Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
Implementing the IBM Storwize V7000 V7.2, SG24-7938
IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
Introduction to Storage Area Networks and System Networking, SG24-5470
IBM SAN Volume Controller and IBM FlashSystem 820: Best Practices and Performance
Capabilities, REDP-5027
Implementing the IBM SAN Volume Controller and FlashSystem 820, SG24-8172
Implementing IBM FlashSystem 840, SG24-8189
IBM FlashSystem in IBM PureFlex System Environments, TIPS1042
IBM FlashSystem 840 Product Guide, TIPS1079
IBM FlashSystem 820 Running in an IBM Storwize V7000 Environment, TIPS1101
Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
IBM FlashSystem V840 Enterprise Performance Solution, TIPS1158
IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363
IBM System Storage b-type Multiprotocol Routing: An Introduction and Implementation,
SG24-7544
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
Tivoli Storage Productivity Center for Replication for Open Systems, SG24-8149
Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
Implementing an IBM b-type SAN with 8 Gbps Directors and Switches, SG24-6116
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Referenced websites
These websites are also relevant as further information sources:
IBM Storage home page:
http://www.storage.ibm.com
IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
Cygwin Linux-like environment for Windows:
http://www.cygwin.com
Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
Sysinternals home page:
http://www.sysinternals.com
Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
Download site for Windows SSH freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
SG24-7933-03
ISBN 0738440469
Printed in U.S.A.
ibm.com/redbooks