Front cover

Implementing the
IBM System Storage
SAN Volume Controller V7.4

Jon Tate
Frank Enders
Torben Jensen
Hartmut Lonzer
Libor Miklas
Marcin Tabinowski

Redbooks
International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller V7.4

April 2015

SG24-7933-03
Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.

Fourth Edition (April 2015)

This edition applies to IBM SAN Volume Controller software Version 7.4 (includes pre-GA code in some areas)
and the IBM SAN Volume Controller 2145-DH8.

© Copyright International Business Machines Corporation 2015. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii

IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv


April 2015, Fourth Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Requirements driving storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Benefits of using the IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Latest changes and enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Chapter 2. IBM SAN Volume Controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


2.1 Brief history of the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 SAN Volume Controller architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 SAN Volume Controller topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 SAN Volume Controller terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 SAN Volume Controller components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.2 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.3 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.4 Stretched system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.5 MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.6 Quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.7 Disk tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.8 Storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.10 Easy Tier performance function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.11 Evaluation mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.12 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.13 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 Volume overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.1 Image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.2 Managed mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.3 Cache mode and cache-disabled volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.4 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.5.5 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.6 Volume I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.6 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.6.1 Use of IP addresses and Ethernet ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.6.2 iSCSI volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6.3 iSCSI authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6.4 iSCSI multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Advanced Copy Services overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.7.1 Synchronous and asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.7.2 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.7.3 Image mode migration and volume mirroring migration . . . . . . . . . . . . . . . . . . . . 43
2.8 SAN Volume Controller clustered system overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8.1 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.8.2 Split I/O Groups or a stretched cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.8.3 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.8.4 Clustered system management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.9 User authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.9.1 Remote authentication through LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.9.2 SAN Volume Controller user names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.9.3 SAN Volume Controller superuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.9.4 SAN Volume Controller Service Assistant Tool . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.9.5 SAN Volume Controller roles and user groups . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.9.6 SAN Volume Controller local authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.9.7 SAN Volume Controller remote authentication and single sign-on . . . . . . . . . . . . 61
2.10 SAN Volume Controller hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.10.1 Fibre Channel interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.10.2 LAN interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.10.3 FCoE interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.11 Flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.11.1 Storage bottleneck problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.11.2 Flash Drive solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.11.3 Flash Drive market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.11.4 Flash Drives and SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.12 What is new with the SAN Volume Controller 7.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.12.1 SAN Volume Controller 7.4 supported hardware list, device driver, and firmware
levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.12.2 SAN Volume Controller 7.4.0 new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.13 Useful SAN Volume Controller web links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73


3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . . . . . . . . . . 76
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.3 Logical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.3.1 Management IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.3.4 IP Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.5 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.3.6 SAN Volume Controller clustered system configuration . . . . . . . . . . . . . . . . . . . 101
3.3.7 Stretched cluster system configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.3.8 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3.9 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3.10 Host mapping (LUN masking) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.3.11 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

3.3.12 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.3.13 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . 115
3.3.14 SVC configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.4 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.4.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.4.2 Disk subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.4.3 SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.4.4 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Chapter 4. SAN Volume Controller initial configuration . . . . . . . . . . . . . . . . . . . . . . . 121


4.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.1.1 Network requirements for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . 123
4.2 Setting up the SAN Volume Controller cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.2.1 Service panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.2.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.2.3 Initiating the cluster from the front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.3 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.3.1 Completing the Create Cluster wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.3.2 Post-requisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.4 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.4.1 Generating public and private SSH key pairs by using PuTTY. . . . . . . . . . . . . . 154
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster. . . . . . . . . 157
4.4.3 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.4.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.4.5 Configuring SSH for IBM AIX clients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.5 Using IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.5.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4.5.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169


5.1 Host attachment overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.2 IBM SAN Volume Controller setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.2.1 Fibre Channel and SAN setup overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5.3 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.3.1 Initiators and targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.3.2 iSCSI nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.3.3 iSCSI qualified name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.3.4 iSCSI setup of the SAN Volume Controller and host server . . . . . . . . . . . . . . . . 179
5.3.5 Volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.3.6 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.3.7 Target failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.3.8 Host failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.4 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.4.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.4.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 183
5.4.3 HBAs for IBM System p hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.4.4 Configuring fast fail and dynamic tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.4.5 Installing the 2145 host attachment support package. . . . . . . . . . . . . . . . . . . . . 185
5.4.6 Subsystem Device Driver Path Control Module . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.4.7 Configuring the assigned volume by using SDDPCM. . . . . . . . . . . . . . . . . . . . . 186
5.4.8 Using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM. . . . . . . . 190
5.4.10 Expanding an AIX volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190

5.4.11 Running SAN Volume Controller commands from an AIX host system . . . . . . 191
5.5 Microsoft Windows information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.5.1 Configuring Windows Server 2008 and 2012 hosts . . . . . . . . . . . . . . . . . . . . . . 192
5.5.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.5.3 Hardware lists, device driver, HBAs, and firmware levels. . . . . . . . . . . . . . . . . . 193
5.5.4 Installing and configuring the host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.5.5 Changing the disk timeout on Windows Server . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.5.6 Installing the SDDDSM multipath driver on Windows . . . . . . . . . . . . . . . . . . . . . 194
5.5.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to Windows
Server 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.5.8 Extending a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.5.9 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.6 Using SAN Volume Controller CLI from a Windows host . . . . . . . . . . . . . . . . . . . . . . 211
5.7 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.7.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.7.2 System requirements for the IBM System Storage hardware provider . . . . . . . . 213
5.7.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . . 213
5.7.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.7.5 Creating free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.7.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.8 Specific Linux (on x86/x86_64) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.8.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.8.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.9 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.9.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.9.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 227
5.9.3 HBAs for hosts that are running VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9.4 VMware storage and zoning guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9.5 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.9.6 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.9.7 Attaching VMware to volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.9.8 Volume naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.9.9 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . . . 232
5.9.10 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.9.11 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.10 Sun Solaris hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.10.1 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.11 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.11.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 237
5.11.2 Supported multipath solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.11.3 Coexistence of SDD and PVLinks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.11.4 Using an IBM SAN Volume Controller volume as a cluster lock disk . . . . . . . . 238
5.11.5 Support for HP-UX with more than eight LUNs. . . . . . . . . . . . . . . . . . . . . . . . . 238
5.12 Using the SDDDSM, SDDPCM, and SDD web interface . . . . . . . . . . . . . . . . . . . . . 238
5.13 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.13.1 SAN Volume Controller storage subsystem attachment guidelines . . . . . . . . . 240

Chapter 6. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241


6.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
6.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

6.2.1 Migrating multiple extents within a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . 242
6.2.2 Migrating extents off an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . . . . 243
6.2.3 Migrating a volume between storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2.4 Migrating the volume to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6.2.5 Non-disruptive Volume Move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
6.4 Migrating data from an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
6.4.1 Image mode volume migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
6.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.5 Data migration for Windows by using the SVC GUI . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.5.1 Windows Server 2008 host system connected directly to the DS 3400 . . . . . . . 257
6.5.2 Adding the SAN Volume Controller between the host system and the
DS 3400 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6.5.3 Importing the migrated disks into an online Windows Server 2008 host. . . . . . . 273
6.5.4 Adding IBM SAN Volume Controller between the host and the DS 3400 by using the
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
6.5.5 Migrating a volume from managed mode to image mode. . . . . . . . . . . . . . . . . . 279
6.5.6 Migrating the volume from image mode to image mode . . . . . . . . . . . . . . . . . . . 283
6.5.7 Removing image mode data from the IBM SAN Volume Controller . . . . . . . . . . 291
6.5.8 Mapping the free disks onto the Windows Server 2008 server. . . . . . . . . . . . . . 293
6.6 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
6.6.1 Connecting the IBM SAN Volume Controller to your SAN fabric . . . . . . . . . . . . 296
6.6.2 Preparing your IBM SAN Volume Controller to virtualize disks. . . . . . . . . . . . . . 297
6.6.3 Moving the LUNs to the IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . 301
6.6.4 Migrating the image mode volumes to managed MDisks . . . . . . . . . . . . . . . . . . 304
6.6.5 Preparing to migrate from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 307
6.6.6 Migrating the volumes to image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . 310
6.6.7 Removing the LUNs from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 311
6.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.7.1 Connecting the IBM SAN Volume Controller to your SAN fabric . . . . . . . . . . . . 315
6.7.2 Preparing your IBM SAN Volume Controller to virtualize disks. . . . . . . . . . . . . . 316
6.7.3 Moving the LUNs to the IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . 320
6.7.4 Migrating the image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
6.7.5 Preparing to migrate from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 326
6.7.6 Migrating the managed volumes to image mode volumes . . . . . . . . . . . . . . . . . 328
6.7.7 Removing the LUNs from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 329
6.8 Migrating AIX SAN disks to SVC volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.8.1 Connecting the IBM SAN Volume Controller to your SAN fabric . . . . . . . . . . . . 334
6.8.2 Preparing your IBM SAN Volume Controller to virtualize disks. . . . . . . . . . . . . . 335
6.8.3 Moving the LUNs to the IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . 340
6.8.4 Migrating image mode volumes to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6.8.5 Preparing to migrate from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 344
6.8.6 Migrating the managed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
6.8.7 Removing the LUNs from the IBM SAN Volume Controller . . . . . . . . . . . . . . . . 348
6.9 Using IBM SAN Volume Controller for storage migration . . . . . . . . . . . . . . . . . . . . . . 351
6.10 Using volume mirroring and thin-provisioned volumes together . . . . . . . . . . . . . . . . 352
6.10.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
6.10.2 Volume mirroring with thin-provisioned volumes. . . . . . . . . . . . . . . . . . . . . . . . 354

Chapter 7. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 361
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.2 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.2.1 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
7.2.2 SSD arrays and flash MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
7.2.3 Disk tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
7.2.4 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
7.2.5 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.2.6 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.2.7 Modifying the Easy Tier setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
7.2.8 Monitoring tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
7.2.9 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.3.1 Configuring a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
7.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7.4 Real-time Compression Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
7.4.1 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.4.2 Real-time Compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.4.3 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.4.4 Random Access Compression Engine in the SVC software stack . . . . . . . . . . . 394
7.4.5 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
7.4.6 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.4.7 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.4.8 Configuring compressed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7.4.9 SVC 2145-DH8 node software and hardware updates related to Real-time
Compression. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7.4.10 Software enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7.4.11 Hardware updates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
7.4.12 Dual RACE component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402

Chapter 8. Advanced Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405


8.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8.1.2 Backup improvements with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8.1.3 Restore with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.1.4 Moving and migrating data with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.1.5 Application testing with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8.1.6 Host and application considerations to ensure FlashCopy integrity . . . . . . . . . . 408
8.1.7 FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8.2 Reverse FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager . . . . . . . . . . . . . . . . . . . . . . 410
8.3 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8.4 Implementing the SAN Volume Controller FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . 414
8.4.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
8.4.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
8.4.3 Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
8.4.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.4.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
8.4.6 Interaction and dependency between multiple target FlashCopy mappings. . . . 420
8.4.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . . 421
8.4.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8.4.9 FlashCopy and image mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423

8.4.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
8.4.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
8.4.12 Thin-provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8.4.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8.4.14 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.4.15 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.4.16 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
8.4.17 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
8.4.18 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . 432
8.4.19 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.6.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.6.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.6.3 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
8.6.4 IP partnership and SVC terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
8.6.5 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.6.6 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
8.6.7 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
8.6.8 Setting up the SVC system IP partnership by using the GUI . . . . . . . . . . . . . . . 454
8.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
8.7.1 Multiple SVC system mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
8.7.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
8.7.3 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
8.7.4 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
8.7.5 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
8.7.6 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.8 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.7.9 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.7.10 SVC Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.7.11 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
8.7.12 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
8.7.13 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
8.7.14 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.15 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.16 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.7.17 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
8.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . . 473
8.7.19 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.7.20 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
8.8 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.8.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
8.8.2 Listing available SVC system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.8.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.8.4 SVC system partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
8.8.5 Creating a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . . 486
8.8.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 486
8.8.7 Changing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 486
8.8.8 Changing a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 487
8.8.9 Starting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 487
8.8.10 Stopping a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 487
8.8.11 Starting a Metro Mirror/Global Mirror Consistency Group. . . . . . . . . . . . . . . . . 488

8.8.12 Stopping a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 488
8.8.13 Deleting a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 488
8.8.14 Deleting a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . . 488
8.8.15 Reversing a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 489
8.8.16 Reversing a Metro Mirror/Global Mirror Consistency Group . . . . . . . . . . . . . . . 489
8.9 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.9.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
8.9.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491

Chapter 9. SAN Volume Controller operations using the command-line interface. . 493
9.1 Normal operations by using CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
9.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
9.2 New commands and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
9.3 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . . 500
9.3.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
9.3.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.3.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.3.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.3.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
9.3.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.3.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.3.8 Adding MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.3.9 Showing MDisks in a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.3.10 Working with a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.3.11 Creating a storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.3.12 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
9.3.13 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.3.14 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.3.15 Removing MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4.1 Creating an FC-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.4.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
9.4.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.4.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
9.5 Working with the Ethernet port for iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
9.6 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
9.6.1 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
9.6.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
9.6.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
9.6.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
9.6.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
9.6.6 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
9.6.7 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.6.8 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.6.9 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.6.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.6.11 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
9.6.12 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.6.13 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.6.14 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.6.15 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . . 537

9.6.16 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
9.6.17 Showing a volume on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
9.6.18 Showing which volumes are using a storage pool . . . . . . . . . . . . . . . . . . . . . . 538
9.6.19 Showing which MDisks are used by a specific volume . . . . . . . . . . . . . . . . . . . 539
9.6.20 Showing from which storage pool a volume has its extents . . . . . . . . . . . . . . . 539
9.6.21 Showing the host to which the volume is mapped . . . . . . . . . . . . . . . . . . . . . . 540
9.6.22 Showing the volume to which the host is mapped . . . . . . . . . . . . . . . . . . . . . . 541
9.6.23 Tracing a volume from a host back to its physical disk . . . . . . . . . . . . . . . . . . . 541
9.7 Scripting under the CLI for SAN Volume Controller task automation . . . . . . . . . . . . . 543
9.7.1 Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
9.8 SAN Volume Controller advanced operations by using the CLI . . . . . . . . . . . . . . . . . 547
9.8.1 Command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
9.8.2 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
9.9 Managing the clustered system by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
9.9.1 Viewing clustered system properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
9.9.2 Changing system settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
9.9.3 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
9.9.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
9.9.5 Supported IP address formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
9.9.6 Setting the clustered system time zone and time . . . . . . . . . . . . . . . . . . . . . . . . 555
9.9.7 Starting statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
9.9.8 Determining the status of a copy operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
9.9.9 Shutting down a clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
9.10 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
9.10.1 Viewing node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
9.10.2 Adding a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
9.10.3 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
9.10.4 Deleting a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
9.10.5 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
9.11 I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
9.11.1 Viewing I/O Group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
9.11.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
9.11.3 Adding and removing hostiogrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
9.11.4 Listing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
9.12 Managing authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
9.12.1 Managing users by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
9.12.2 Managing user roles and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
9.12.3 Changing a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
9.12.4 Audit log command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
9.13 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
9.13.1 FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
9.13.2 Setting up FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
9.13.3 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
9.13.4 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
9.13.5 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 572
9.13.6 Preparing (pre-triggering) the FlashCopy Consistency Group . . . . . . . . . . . . . 573
9.13.7 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
9.13.8 Starting (triggering) the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . 575
9.13.9 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
9.13.10 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
9.13.11 Stopping the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 577
9.13.12 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
9.13.13 Deleting the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 578

9.13.14 Migrating a volume to a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . 579
9.13.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
9.13.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
9.14 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
9.14.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
9.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
9.14.3 Creating a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
9.14.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 592
9.14.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
9.14.7 Starting a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
9.14.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
9.14.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
9.14.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 595
9.14.11 Stopping a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . 596
9.14.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 597
9.14.13 Restarting a Metro Mirror Consistency Group in the Idling state . . . . . . . . . . 598
9.14.14 Changing the copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . 598
9.14.15 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . 599
9.14.16 Switching the copy direction for a Metro Mirror Consistency Group . . . . . . . . 600
9.14.17 Creating a SAN Volume Controller partnership among clustered systems. . . 601
9.14.18 Star configuration partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
9.15 Global Mirror operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
9.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
9.15.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
9.15.3 Changing link tolerance and system delay simulation . . . . . . . . . . . . . . . . . . . 610
9.15.4 Creating a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 611
9.15.5 Creating Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
9.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . . 613
9.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
9.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 614
9.15.9 Starting a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
9.15.10 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
9.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
9.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 617
9.15.13 Stopping a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 617
9.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 618
9.15.15 Restarting a Global Mirror Consistency Group in the Idling state . . . . . . . . . . 619
9.15.16 Changing the direction for Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
9.15.17 Switching the copy direction for a Global Mirror relationship . . . . . . . . . . . . . 620
9.15.18 Switching the copy direction for a Global Mirror Consistency Group . . . . . . . 621
9.15.19 Changing a Global Mirror relationship to the cycling mode. . . . . . . . . . . . . . . 622
9.15.20 Creating the thin-provisioned Change Volumes . . . . . . . . . . . . . . . . . . . . . . . 624
9.15.21 Stopping the stand-alone remote copy relationship . . . . . . . . . . . . . . . . . . . . 624
9.15.22 Setting the cycling mode on the stand-alone remote copy relationship . . . . . 625
9.15.23 Setting the Change Volume on the master volume. . . . . . . . . . . . . . . . . . . . . 625
9.15.24 Setting the Change Volume on the auxiliary volume . . . . . . . . . . . . . . . . . . . 626
9.15.25 Starting the stand-alone relationship in the cycling mode. . . . . . . . . . . . . . . . 626
9.15.26 Stopping the Consistency Group to change the cycling mode . . . . . . . . . . . . 627
9.15.27 Setting the cycling mode on the Consistency Group . . . . . . . . . . . . . . . . . . . 628
9.15.28 Setting the Change Volume on the master volume relationships of the Consistency
Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
9.15.29 Setting the Change Volumes on the auxiliary volumes. . . . . . . . . . . . . . . . . . 630
9.15.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode . . . . . . 631
9.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
9.16.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
9.16.2 Running the maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
9.16.3 Setting up SNMP notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
9.16.4 Setting the syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
9.16.5 Configuring error notification by using an email server . . . . . . . . . . . . . . . . . . . 641
9.16.6 Analyzing the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
9.16.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
9.16.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
9.17 Backing up the SAN Volume Controller system configuration . . . . . . . . . . . . . . . . . 647
9.17.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
9.18 Restoring the SAN Volume Controller clustered system configuration . . . . . . . . . . . 649
9.18.1 Deleting the configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
9.19 Working with the SAN Volume Controller quorum MDisks . . . . . . . . . . . . . . . . . . . . 650
9.19.1 Listing the SAN Volume Controller quorum MDisks . . . . . . . . . . . . . . . . . . . . . 650
9.19.2 Changing the SAN Volume Controller quorum MDisks. . . . . . . . . . . . . . . . . . . 650
9.20 Working with the Service Assistant menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
9.20.1 SAN Volume Controller CLI Service Assistant menu . . . . . . . . . . . . . . . . . . . . 651
9.21 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
9.22 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654

Chapter 10. SAN Volume Controller operations using the GUI. . . . . . . . . . . . . . . . . . 655
10.1 Normal SVC operations using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
10.1.1 Introduction to the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
10.1.2 Content view organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
10.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
10.2 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
10.2.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.2.2 System details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
10.2.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
10.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
10.3 Working with external disk controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.1 Viewing the disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.3.3 Site awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
10.3.4 Discovering MDisks from the external panel. . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4.1 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
10.4.2 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
10.4.3 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
10.4.4 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
10.5 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
10.5.1 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
10.5.2 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
10.5.3 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
10.5.4 Assigning MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
10.5.5 Unassigning MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
10.6 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.7 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.7.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687

10.7.2 Adding a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
10.7.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
10.7.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
10.7.5 Creating or modifying a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
10.7.6 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
10.7.7 Deleting all host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
10.8 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
10.8.1 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
10.8.2 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
10.8.3 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
10.8.4 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
10.8.5 Modifying thin-provisioned or compressed volume properties . . . . . . . . . . . . . 710
10.8.6 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
10.8.7 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
10.8.8 Deleting all host mappings for a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
10.8.9 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
10.8.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
10.8.11 Shrinking the real capacity of a thin-provisioned or compressed volume . . . . 721
10.8.12 Expanding the real capacity of a thin-provisioned or compressed volume . . . 723
10.8.13 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
10.8.14 Adding a mirrored copy to an existing volume . . . . . . . . . . . . . . . . . . . . . . . . 727
10.8.15 Deleting a mirrored copy from a volume mirror. . . . . . . . . . . . . . . . . . . . . . . . 729
10.8.16 Splitting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
10.8.17 Validating volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
10.8.18 Migrating to a thin-provisioned volume by using volume mirroring . . . . . . . . . 732
10.8.19 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.8.20 Migrating a volume to an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . 735
10.8.21 Creating an image mode mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.9 Copy Services and managing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
10.9.1 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
10.9.2 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
10.9.3 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
10.9.4 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
10.9.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
10.9.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 751
10.9.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
10.9.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 756
10.9.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 757
10.9.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
10.9.11 Renaming a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
10.9.12 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
10.9.13 Deleting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
10.9.14 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
10.9.15 Starting the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
10.9.16 Stopping the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
10.10 Copy Services: Managing remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
10.10.1 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
10.10.2 Creating a Fibre Channel partnership between two remote SVC systems . . . 767
10.10.3 Creating an IP partnership between remote SVC systems. . . . . . . . . . . . . . . 770
10.10.4 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . 772
10.10.5 Creating a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
10.10.6 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
10.10.7 Renaming a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783

10.10.8 Moving a stand-alone remote copy relationship to a Consistency Group . . . . 784
10.10.9 Removing a remote copy relationship from a Consistency Group . . . . . . . . . 785
10.10.10 Starting a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.10.11 Starting a remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 787
10.10.12 Switching the copy direction for a remote copy relationship . . . . . . . . . . . . . 789
10.10.13 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . 791
10.10.14 Stopping a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
10.10.15 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
10.10.16 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . 796
10.10.17 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
10.11 Managing the SAN Volume Controller clustered system by using the GUI. . . . . . . 798
10.11.1 System status information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
10.11.2 View I/O Groups and their associated nodes . . . . . . . . . . . . . . . . . . . . . . . . . 800
10.11.3 View SVC clustered system properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
10.11.4 Renaming the SAN Volume Controller clustered system . . . . . . . . . . . . . . . . 803
10.11.5 Renaming the site information of the nodes . . . . . . . . . . . . . . . . . . . . . . . . . . 805
10.11.6 Rename a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
10.11.7 Shutting down the SAN Volume Controller clustered system . . . . . . . . . . . . . 807
10.11.8 Power off a single node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
10.12 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
10.12.1 Updating system software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
10.12.2 Update drive software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
10.13 Managing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
10.14 Managing nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
10.14.1 Viewing node properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
10.14.2 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
10.14.3 Adding a node to the SAN Volume Controller clustered system. . . . . . . . . . . 817
10.14.4 Removing a node from the SAN Volume Controller clustered system . . . . . . 820
10.15 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
10.15.1 Events panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
10.15.2 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
10.15.3 Support panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
10.16 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
10.16.1 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
10.16.2 Modifying the user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
10.16.3 Removing a user password. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
10.16.4 Removing a user SSH public key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
10.16.5 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
10.16.6 Creating a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
10.16.7 Modifying the user group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
10.16.8 Deleting a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
10.16.9 Audit log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
10.17 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.1 Configuring the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.2 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
10.17.3 Fibre Channel information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
10.17.4 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
10.17.5 Email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
10.17.6 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
10.17.7 System options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
10.17.8 Date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
10.17.9 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
10.17.10 Setting GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857

10.18 Upgrading the SAN Volume Controller software . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
10.18.1 Precautions before the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
10.18.2 SAN Volume Controller upgrade test utility. . . . . . . . . . . . . . . . . . . . . . . . . . . 859
10.18.3 Upgrade procedure from version 7.3.x.x to version 7.4.x.x. . . . . . . . . . . . . . . 860
10.18.4 Upgrade procedure from version 7.4.x.x to 7.4.y.y . . . . . . . . . . . . . . . . . . . . . 867

Appendix A. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . 875


SAN Volume Controller performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
SAN Volume Controller performance perspectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
Real-time performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
Performance data collection and Tivoli Storage Productivity Center for Disk . . . . . . . . 886

Appendix B. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889


Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 890

Appendix C. SAN Volume Controller stretched cluster . . . . . . . . . . . . . . . . . . . . . . . . 903


Stretched cluster overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
Non-ISL configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
Inter-switch link configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 913

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
Referenced websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.



Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AIX 5L™, DB2®, developerWorks®, DS4000®, DS5000™, DS6000™, DS8000®, Easy Tier®,
FlashCopy®, FlashSystem™, Global Technology Services®, GPFS™, HyperSwap®, IBM®,
IBM FlashSystem™, POWER®, Power Systems™, pureScale®, Real-time Compression™,
Redbooks®, Redbooks (logo)®, Smarter Planet®, Storwize®, System p®, System Storage®,
Tivoli®, WebSphere®, XIV®

The following terms are trademarks of other companies:

Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.

IBM Redbooks promotions

Find and read thousands of IBM Redbooks publications: search, bookmark, save, and organize
favorites; get up-to-the-minute Redbooks news and announcements; and link to the latest
Redbooks blogs and videos. Get the latest version of the Redbooks Mobile App for Android and iOS.

Promote your business in an IBM Redbooks publication. Place a Sponsorship Promotion in an
IBM Redbooks publication, featuring your business or solution with a link to your website.
Qualified IBM Business Partners may place a full page promotion in the most popular Redbooks
publications. Imagine the power of being seen by users who download millions of Redbooks
publications each year!

ibm.com/Redbooks
About Redbooks Business Partner Programs
Preface

This IBM® Redbooks® publication is a detailed technical guide to the IBM System
Storage® SAN Volume Controller Version 7.4.
The SAN Volume Controller (SVC) is a virtualization appliance solution, which maps
virtualized volumes that are visible to hosts and applications to physical volumes on storage
devices. Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.

The IBM virtualization technology improves the management of information at the “block”
level in a network, which enables applications and servers to share storage devices on a
network.

This book is intended for readers who want to implement the SVC at a 7.4 release level with
minimal effort.

Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization
(ITSO), San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2
support for IBM storage products. Jon has 28 years of
experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an
IBM SAN Certified Specialist. He is also the UK Chairman of
the Storage Networking Industry Association (SNIA).

Frank Enders has worked for the last seven years for EMEA
IBM System Storage SAN Volume Controller and V7k Level 2
support in Mainz, Germany, and his duties include pre- and
post-sales support. He has worked for IBM Germany for 20
years and started as a technician in disk production for IBM
Mainz, and changed to magnetic head production four years
later. When IBM closed disk production in Mainz in 2001, he
also changed his role and continued working for IBM within
ESCC Mainz as a team member of the Installation Readiness
team for products, such as the IBM DS8000®, IBM DS6000™,
and the IBM System Storage SAN Volume Controller. During
that time, he studied for four years to gain a diploma in
Electrical Engineering.



Torben Jensen is an IT Specialist at IBM Global Technology
Services®, Copenhagen, Denmark. He joined IBM in 1999 for
an apprenticeship as an IT-System Supporter. From 2001 to
2005, he was the Client Representative for IBM Internal Client
platforms in Denmark. Torben started work with the SAN/DISK
for open systems department in March 2005. He provides daily
and ongoing support to clients and colleagues, and works with
SAN designs and solutions for clients. Torben has written
several IBM Redbooks publications in the Storage virtualization
area in the past.

Hartmut Lonzer is a Client Technical Specialist for IBM Germany. He works in the IBM Germany headquarters in
Ehningen. Hartmut is a member of Technical Sales Support for
Storage in Germany, Austria, and Switzerland. His main focus
is on the IBM SAN Volume Controller and the IBM Storwize®
Family. His experience with the IBM SAN Volume Controller
and Storwize products goes back to the beginning of these
products. Hartmut has been with IBM in various technical roles
for 37 years.

Libor Miklas is an IT Architect working at the IBM Integrated Delivery Center in the Czech Republic. He has 10 years of
extensive experience within the IT industry. For the last eight
years, his main focus has been on the data protection solutions
and on storage management and provisioning. Libor and his
team design, implement, and support midrange and enterprise
storage environments for various global and local clients. He is
an IBM Certified Deployment Professional for the IBM Tivoli®
Storage Manager family of products and holds a Masters
degree in Electrical Engineering and Telecommunications.

Marcin Tabinowski works as an IT Specialist in STG Lab Services in Poland. He has over eight years of experience in
designing and implementing IT solutions based on storage and
IBM POWER® systems. His main responsibilities are
architecting, consulting, implementing, and documenting
projects, including storage systems, SAN networks, POWER
systems, disaster recovery, virtualization, and data migration.
Pre-sales, post-sales, and training are also part of his everyday
duties. Marcin holds many certifications, which span different
IBM storage products and POWER systems. He also holds an MSc in Computer Science from
Wroclaw University of Technology, Poland.

Thanks to the following people for their contributions to this project:

Barry Whyte
Katja Gebuhr
Paul Cashman
Paul Merrison
Nicholas Sunderland
Stephen Wright
John Fairhurst
Trevor Boardman
IBM Hursley, UK

Nick Clayton
IBM Systems & Technology Group, UK

Helen Burton
IBM Systems & Technology Group, Boulder, CO, US

Special thanks to the Brocade Communications Systems staff in San Jose, California, for
their unparalleled support of this residency in terms of equipment and support in many areas:

Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Jim Baldyga
Brocade Communications Systems

Now you can become a published author, too!


Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
򐂰 Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

򐂰 Send your comments in an email to:
redbooks@us.ibm.com
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


򐂰 Find us on Facebook:
http://www.facebook.com/IBMRedbooks
򐂰 Follow us on Twitter:
http://twitter.com/ibmredbooks
򐂰 Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
򐂰 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
򐂰 Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html

Summary of changes

This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.

Summary of Changes
for SG24-7933-03
for Implementing the IBM System Storage SAN Volume Controller V7.4
as created or updated on April 24, 2015.

April 2015, Fourth Edition


This revision includes the following new and changed information.

New information
򐂰 New hardware
򐂰 IBM Easy Tier®
򐂰 GUI

Changed information
򐂰 GUI
򐂰 Planning
򐂰 Copy services
򐂰 And much more!



Chapter 1. Introduction to storage virtualization
In this chapter, we define the concept of storage virtualization. Then, we present an overview
that describes how you can apply virtualization to help address challenging storage
requirements.

This chapter discusses the following topics:


򐂰 Storage virtualization terminology
򐂰 Requirements driving storage virtualization
򐂰 Latest changes and enhancements
򐂰 Summary



1.1 Storage virtualization terminology
Although storage virtualization is a term that is used extensively throughout the storage
industry, it can be applied to various technologies and underlying capabilities. In reality, most
storage devices technically can claim to be virtualized in one form or another. Therefore, we
must start by defining the concept of storage virtualization as it is used in this book.

IBM defines storage virtualization in the following manner:


򐂰 Storage virtualization is a technology that makes one set of resources resemble another
set of resources, preferably with more desirable characteristics.
򐂰 It is a logical representation of resources that is not constrained by physical limitations and
hides part of the complexity. It also adds or integrates new function with existing services
and can be nested or applied to multiple layers of a system.

When we mention storage virtualization, it is important to understand that virtualization can
be implemented at various layers within the I/O stack. We must clearly distinguish between
virtualization at the disk layer and virtualization at the file system layer.

The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization, or the block aggregation layer. A description of file system
virtualization is beyond the scope of this book.

For more information about file system virtualization, see the following resources:
򐂰 IBM General Parallel File System (GPFS™):
http://www.ibm.com/systems/software/gpfs/
򐂰 IBM Scale Out Network Attached Storage, which is based on GPFS:
http://www.ibm.com/systems/storage/network/sonas/

The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and its layers, as shown in Figure 1-1 on page 3. It
illustrates three layers of a storage domain: the file, block aggregation, and block subsystem
layers.

The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).

The IBM implementation of a block aggregation solution is the IBM SAN Volume Controller
(SVC). The SVC is implemented as a clustered appliance in the storage network layer. For
more information about the reasons why IBM chose to develop its SVC in the storage network
layer, see Chapter 2, “IBM SAN Volume Controller” on page 9.



Figure 1-1 SNIA block aggregation model1

The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.

Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.

The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).

The term physical disk is used in this context to describe a piece of storage that might be
carved out of a RAID array in the underlying disk subsystem.

Specific to the SVC implementation, the address space that is presented to hosts as a logical
entity is referred to as a volume. The physical disks are referred to as managed disks (MDisks).

Figure 1-2 on page 4 shows an overview of block-level virtualization.

1 Source: Storage Networking Industry Association.



Figure 1-2 Block-level virtualization overview

The server and application are aware of the logical entities only, and they access these
entities by using a consistent interface that is provided by the virtualization layer.

The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which allows a user to move or migrate data between
physical locations, which are referred to as storage pools.
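
To make this mapping concept more concrete, the following minimal Python sketch illustrates how a
block-level virtualization layer, such as the SVC, can translate a volume LBA into a managed disk
and an offset on that disk. This is an illustration of the principle only, not SVC code: the
mapping granularity, the table contents, and the names are assumptions that were chosen for the
example.

# Conceptual sketch (not SVC code) of block-level virtualization:
# translate a volume logical block address (LBA) into a location on an MDisk.
# The granularity, table contents, and names are illustrative assumptions.

GRANULARITY_BLOCKS = 2048        # assume mapping units of 2048 blocks (1 MiB of 512-byte blocks)

# Hypothetical mapping table: unit number within the volume -> (MDisk, start LBA on that MDisk)
mapping_table = {
    0: ("mdisk0", 0),
    1: ("mdisk3", 8192),
    2: ("mdisk1", 2048),
}

def virtual_to_physical(volume_lba):
    """Translate a volume LBA into an (MDisk, LBA) pair."""
    unit = volume_lba // GRANULARITY_BLOCKS       # which mapping unit the LBA falls into
    offset = volume_lba % GRANULARITY_BLOCKS      # offset within that unit
    mdisk, mdisk_start_lba = mapping_table[unit]
    return mdisk, mdisk_start_lba + offset

print(virtual_to_physical(2100))                  # -> ('mdisk3', 8244)

Because the host sees only the volume LBA, the virtualization layer can change the entries of such
a table (for example, during a migration between storage pools) without the host being aware of it.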

We refer to these block-level storage virtualization capabilities as the cornerstones of
virtualization. These cornerstones are the core benefits that a product, such as the IBM SAN
Volume Controller, can provide over traditional directly attached or SAN storage.

The SVC provides the following benefits:


򐂰 The SVC provides online volume migration while applications are running, which is
possibly the greatest single benefit for storage virtualization. This capability allows data to
be migrated within and between the underlying storage subsystems without any impact to the
servers and applications. In fact, the migration is performed without the servers and
applications even being aware that it occurred.
򐂰 The SVC simplifies storage management by providing a single image for multiple
controllers and a consistent user interface for provisioning heterogeneous storage.
򐂰 The SVC provides enterprise-level Copy Services functions. Performing Copy Services
functions within the SVC removes dependencies on the storage subsystems; therefore, it
enables the source and target copies to be on other storage subsystem types.
򐂰 Storage usage can be increased by pooling storage across the SAN.
򐂰 System performance is often improved with the SVC as a result of volume striping across
multiple arrays or controllers and the additional cache that it provides.

The SVC delivers these functions in a homogeneous way on a scalable and highly available
platform over any attached storage and to any attached server.



1.2 Requirements driving storage virtualization
Today, an emphasis exists on an IBM Smarter Planet® and a dynamic infrastructure. Therefore,
a storage environment is needed that is as flexible as application and server mobility.
Business demands change quickly.

The following key client concerns drive storage virtualization:


򐂰 Growth in data center costs
򐂰 Inability of IT organizations to respond quickly to business demands
򐂰 Poor asset usage
򐂰 Poor availability or service levels
򐂰 Lack of skilled staff for storage administration

You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.

But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.

1.2.1 Benefits of using the IBM SAN Volume Controller


The SVC can reduce the number of separate environments that must be managed down to a
single environment. It also provides a single interface for storage management. After the
initial configuration of the storage subsystems, all of the day-to-day storage management
operations are performed from the SVC.

Because the SVC provides advanced functions, such as mirroring and FlashCopy, there is no
need to purchase them again for each new disk subsystem.

Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. The use of the installed raw capacity in the disk
subsystems shows usage numbers of less than 35%, depending on the RAID level that is
used. A block-level virtualization solution, such as IBM SAN Volume Controller, can allow
capacity usage to increase to approximately 75 - 80%.

With the SVC, free space does not need to be maintained and managed within each storage
subsystem, which further increases capacity usage.
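
As a rough, hedged illustration of these utilization figures, the following Python sketch compares
how much data fits into the same raw capacity at the typical non-virtualized utilization that is
cited above and at the 75 - 80% that block-level virtualization can enable. The 100 TB raw
capacity and the 0.7 RAID overhead factor are assumptions for the example only.

# Illustrative arithmetic only; the capacity and RAID factor are assumptions,
# and the utilization percentages are the typical figures cited in the text.

raw_tb = 100                            # assumed raw capacity of the disk subsystems
usable_tb = raw_tb * 0.7                # assumed usable capacity after RAID overhead

non_virtualized_data_tb = raw_tb * 0.35      # roughly 35% of raw capacity in use
virtualized_data_tb = usable_tb * 0.78       # roughly 75 - 80% of usable capacity in use

print(f"Non-virtualized: about {non_virtualized_data_tb:.0f} TB of data stored")
print(f"Virtualized:     about {virtualized_data_tb:.0f} TB of data stored")

Under these assumptions, the same installed hardware holds roughly 55 TB of data instead of 35 TB,
which translates directly into deferred capacity purchases.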

1.3 Latest changes and enhancements


The SVC V7.3 release and its related hardware upgrade represent an important milestone in the
product line, with further enhancements in V7.4. The product's internal architecture was
significantly rebuilt, enabling the system to break the previous limitations in terms of
scalability and flexibility.

The SVC storage engine model DH8 and SVC Small Form Factor (SFF) Expansion
Enclosure Model 24F deliver increased performance, expanded connectivity, compression
acceleration, and additional internal flash storage capacity.



An SVC Storage Engine Model DH8, which is based on IBM System x server technology,
consists of one Xeon E5 v2 eight-core 2.6 GHz processor and 32 GB of memory. It includes
three 1 Gb Ethernet ports as standard for 1 Gbps iSCSI connectivity and supports up to four
16 Gbps Fibre Channel (FC) I/O adapter cards for 16 Gbps FC or 10 Gbps iSCSI/Fibre
Channel over Ethernet (FCoE) connectivity (Converged Network Adapters). Up to three
8 Gbps native FC cards are supported. It also includes two integrated AC power supplies and
battery units, replacing the uninterruptible power supply feature that was required on the
previous generation storage engine models.

The front view of the two-node cluster based on the 2145-DH8 is shown in Figure 1-3.

Figure 1-3 Front view of 2145-DH8

V7.3 includes the following changes:


򐂰 Significant upgrade of the 2145-DH8 hardware
The new 2145-DH8 introduces a 2U server based on the IBM x3650 M4 series and
integrates the following features:
– A minimum of one eight-core processor with 32 GB of memory for the SVC. A secondary
processor with an additional 32 GB of memory is optional, but it is required when a third
I/O card is needed and is compulsory when IBM Real-time Compression™ is enabled.
– An integrated dual-battery pack acts as an uninterruptible power supply during a power
outage. An external UPS device is no longer needed, which avoids miscabling issues.
– Dual, redundant power supplies, therefore no external power switch is required.
– The front panel was removed. Most of its functions were moved to the rear Technician
Port (Ethernet port), which has Dynamic Host Configuration Protocol (DHCP) enabled
for instant access.
– Two boot drives with data mirrored across the drives. The SVC node still boots if one
boot drive fails. Dump data is striped for performance reasons.
– Enhanced scalability with three PCI Express slots, which allow users to install up
to three four-port 8 Gbps FC host bus adapters (HBAs) (12 ports). It
supports one four-port 10GbE card (iSCSI or FCoE) and one dual-port 12 Gbps
serial-attached SCSI (SAS) card for flash drive expansion unit attachment (model
2145-24F).



– Improved Real-time Compression engine (RACE) with the processing offloaded to the
secondary dedicated processor and using 36 GB of dedicated memory cache. At a
minimum, one Compression Accelerator card must be installed, which supports up to 200
compressed volumes; two Compression Accelerator cards allow up to 512 compressed
volumes.
– Optional 2U expansion enclosure 2145-24F with up to 24 flash drives (200, 400, or
800 GB) for tier 0 when IBM Easy Tier function is enabled. The RAID array can be built
across both expansions attached to each node in the I/O Group. The 12 Gbps SAS
four-port adapter needs to be installed in each SVC node.
򐂰 V7.3 includes the following software enhancements:
– Extended Easy Tier functionality with a storage pool balancing mode within the same
tier. It moves or exchanges extents between highly utilized and lightly utilized managed
disks (MDisks) within a storage pool, therefore increasing the read and write
performance of the volumes. This function is enabled automatically in the SVC and
does not need any license. It cannot be disabled by the administrator.
– The SVC cache rearchitecture splits the original single cache into upper and lower
caches of different sizes. The upper cache uses up to 256 MB, and the lower cache uses
up to 64 GB of installed memory, allocated across both processors (if installed). In
addition, 36 GB of memory is always allocated to Real-time Compression if it is enabled.
– Near-instant prepare for FlashCopy due to the presence of the lower cache. Multiple
snapshots of a golden image now share cache data (instead of maintaining N separate
copies).

V7.4 incorporates the following changes:


򐂰 Hardware changes:
The SVC introduces a new 16 Gbps FC adapter based on the Emulex Lancer
multiprotocol chip, which offers the SVC either FC connectivity or Fibre Channel over
Ethernet (FCoE) connectivity. The adapter can be configured with a two-port 16 Gbps FC,
four-port 8 Gbps FC, or four-port 10 GbE profile.
򐂰 Software changes:
– The most noticeable change in SVC V7.4 after the first login is the modified GUI, which
has a new system panel layout and enhanced functions that are available directly
from the welcome window. The concept of the GUI design conforms to the well-known
approach from IBM System Storage XIV® Gen3 and IBM FlashSystem™ 840. It
provides common, unified procedures to manage all these systems in a similar way,
allowing administrators to simplify their operational procedures across all systems.
– Child pools are new objects that are created from a physical (parent) storage pool. They
provide most of the functions of managed disk groups (MDiskgrps), for example,
volume creation, but the user can specify the capacity of the child pool at creation time.
In previous SVC releases, the disk space of a storage pool came only from MDisks, so
the capacity of a storage pool depended on the capacity of its MDisks, and the user
could not freely create a storage pool with an arbitrary capacity. The maximum
number of storage pools remains at 128, and each storage pool can have up to 127
child pools. Child pools can be created only in the command-line interface (CLI);
however, the GUI shows child pools and their differences from parent pools.
– A new level of volume protection prevents users from removing mappings of volumes
that are considered active. Active means that the system detected recent I/O activity to
the volume from any host within a protection period that is defined by the user. This
behavior is enabled by system-wide policy settings. The detailed volume view contains
a new field that indicates when the volume was last accessed. (A conceptual sketch of
this policy check follows this list.)



– With V7.4, a user can replace a failed flash drive by removing it from the 2145-24F
expansion unit and installing a new replacement drive, without requiring Directed
Maintenance Procedure (DMP) to supervise the action. When the user determines that the
fault LED is illuminated for a drive, they can reseat or replace the drive in that slot. The
system automatically performs the drive hardware validation
tests and promotes the unit into the configuration if these checks pass.
– The 2145-DH8 with SVC code V7.4 and higher supports the T10 Data Integrity Field
between the internal RAID layer and the drives that are attached to supported
enclosures.
– The SVC supports a 4096-byte native drive block size without requiring clients to
change their block size. The SVC supports an intermix of 512-byte and 4096-byte native
drive block sizes within an array. The GUI recognizes drives with different block sizes and
represents them with different classes.
– The SVC 2145-DH8 with code V7.4 improves the performance of Real-time
Compression and provides up to double the I/O operations per second (IOPS) on the
model DH8 when it is equipped with both Compression Accelerator cards. It introduces two
separate software compression engines (RACE), taking advantage of multi-core
controller architecture. Hardware resources are shared between both RACE engines.
– SVC V7.4 adds virtual LAN (VLAN) support for iSCSI and IP replication. When VLAN is
enabled and its ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication on the SVC, appropriate VLAN settings on the Ethernet network
and servers must also be correctly configured to avoid connectivity issues. After the
VLANs are configured, changes to their settings disrupt the iSCSI or IP replication
traffic to and from the SVC.
– New informational fields are added to the CLI output of the lsportip, lsportfc, and
lsportsas commands, indicating the physical port locations of each logical port in the
system.
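
The volume protection behavior that is described earlier in this list can be summarized with the
following minimal Python sketch. It models only the decision logic (block the removal of a mapping
if host I/O was detected within the protection period); the names, the default value, and the
structure are assumptions for illustration and are not the SVC implementation or its CLI.

# Conceptual sketch of the volume protection policy decision; not SVC code.
# The 15-minute protection period and all names here are illustrative only.
from datetime import datetime, timedelta

volume_protection_enabled = True                 # system-wide policy setting
protection_period = timedelta(minutes=15)        # user-defined protection period (assumed value)

def removal_allowed(last_io_time, now=None):
    """Return True if removing the volume mapping should be allowed."""
    if not volume_protection_enabled:
        return True
    now = now or datetime.utcnow()
    # Block the action if the system detected host I/O within the protection period.
    return (now - last_io_time) > protection_period

print(removal_allowed(datetime.utcnow() - timedelta(minutes=5)))   # False: the volume is active
print(removal_allowed(datetime.utcnow() - timedelta(hours=2)))     # True: no recent I/O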

The IBM SAN Volume Controller 2145-DH8 ships with preloaded V7.4 software. Downgrading
the software to version 7.2 or lower is not supported. The 2145-DH8 rejects any attempt to
install a version that is lower than 7.3.

1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. The use of storage virtualization as the
foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.

The IBM SAN Volume Controller is a mature, eighth-generation virtualization solution that uses open standards and complies with the SNIA storage model. The SVC provides appliance-based, in-band block virtualization, in which intelligence (including advanced storage functions) is moved from individual storage devices into the storage network.

SVC can improve the utilization of your storage resources, simplify your storage
management, and improve the availability of your applications.


Chapter 2. IBM SAN Volume Controller


In this chapter, we explain the major concepts underlying the IBM SAN Volume Controller
(SVC).

We present a brief history of the SVC product, and then provide an architectural overview.
After we define SVC terminology, we describe software and hardware concepts and the other
functionalities that are available with the newest release.

Finally, we provide links to websites where you can obtain more information about the SVC.

This chapter includes the following topics:


򐂰 Brief history of the SAN Volume Controller
򐂰 SAN Volume Controller architectural overview
򐂰 SAN Volume Controller terminology
򐂰 SAN Volume Controller components
򐂰 Volume overview
򐂰 iSCSI overview
򐂰 Advanced Copy Services overview
򐂰 SAN Volume Controller clustered system overview
򐂰 User authentication
򐂰 SAN Volume Controller hardware overview
򐂰 Flash drives
򐂰 What is new with SVC 7.4
򐂰 Useful SAN Volume Controller web links

2.1 Brief history of the SAN Volume Controller
The IBM implementation of block-level storage virtualization, the IBM System Storage SAN
Volume Controller, is based on an IBM project that was started in the second half of 1999 at
the IBM Almaden Research Center. The project was called COMmodity PArts Storage
System, or COMPASS.

One goal of this project was to create a system that was almost exclusively composed of
off-the-shelf standard parts. As with any enterprise-level storage control system, it had to
deliver a level of performance and availability that was comparable to the highly optimized
storage controllers of previous generations. Building a storage control system that is based on a scalable cluster of lower-performance servers, instead of a monolithic two-node architecture, remains a compelling idea.

COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.

The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at this website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853

The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of the IBM System Storage SAN Volume Controller was announced in July
2003.

Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).

The most recently released hardware node, the 2145-DH8, is based on an IBM System x
3650 M4 server technology with the following features:
򐂰 One 2.6 GHz Intel Xeon Processor E5-2650 v2 with eight processor cores (A second
processor is optional.)
򐂰 Up to 64 GB of cache
򐂰 Up to three four-port 8 Gbps Fibre Channel (FC) cards
򐂰 Up to four two-port 16 Gbps FC cards
򐂰 One four-port 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE) card
򐂰 One 12 Gbps serial-attached SCSI (SAS) Expansion card for an additional two SAS
expansions
򐂰 Three 1 Gbps ports for management and iSCSI host access
򐂰 One Technician port
򐂰 Two battery packs

The SVC node can support up to two external Expansion Enclosures for Flash Cards. These
storage spaces can only be used for the IBM Easy Tier function. No host usage of this
storage space is possible.

2.2 SAN Volume Controller architectural overview
The SVC is a SAN block aggregation virtualization appliance that is designed for attachment
to various host computer systems.

The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
򐂰 Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host. With symmetric virtualization, the virtualization
engine is the central configuration point for the SAN. The virtualization engine directly
controls access to the storage and to the data that is written to the storage. As a result,
locking functions that provide data integrity and advanced functions, such as cache and
Copy Services, can be run in the virtualization engine itself. Therefore, the virtualization
engine is a central point of control for device and advanced function management.
Symmetric virtualization allows you to build a firewall in the storage network. Only the
virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage is scalability: because all input/output (I/O) must flow through the virtualization engine, the engine can become a performance bottleneck. To solve this problem, you can use an n-way cluster of virtualization engines that has failover capacity. You can scale the additional processor power, cache memory, and adapter
bandwidth to achieve the level of performance that you want. Additional memory and
processing power are needed to run advanced services, such as Copy Services and
caching.
The SVC uses symmetric virtualization. Single virtualization engines, which are known as
nodes, are combined to create clusters. Each cluster can contain between two and eight
nodes.
򐂰 Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and performs a metadata-style service. The metadata server contains all the mapping and locking tables; the storage devices contain only data. In asymmetric virtual storage networks, the data flow is separated from the control flow, and a separate network or SAN link is used for control purposes. Because the flow of control is separated from the flow of data, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only knows about the metadata and not about the data itself.

Figure 2-1 shows variations of the two virtualization approaches.

Figure 2-1 Overview of block-level virtualization architectures: in-band appliance (symmetric virtualization) compared with out-of-band, controller-based (asymmetric virtualization)

Although these approaches provide essentially the same cornerstones of virtualization, interesting side effects can occur, as described next.

The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, no true decoupling occurs with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration raises challenging questions, such as how to reconnect the servers to the new controller, and how to do so online without any effect on your applications.

Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on
might be necessary.

With a SAN or fabric-based appliance solution that is based on a scale-out cluster architecture, lifecycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are simple. Servers and applications remain online, data migration occurs transparently on the virtualization platform, and licenses for virtualization and copy services require no update; that is, they incur no additional costs when disk subsystems are replaced.

Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services and that is open for future interfaces
and protocols. By using the fabric-based appliance solution, you can choose the disk
subsystems that best fit your requirements, and you are not locked into specific SAN
hardware.

For these reasons, IBM chose the SAN or fabric-based appliance approach for the
implementation of the SVC.

The SVC includes the following key characteristics:


򐂰 It is highly scalable, which provides an easy growth path: the cluster grows in pairs of nodes (I/O Groups), up to eight nodes.
򐂰 It is SAN interface-independent. It supports FC, FCoE, and iSCSI, and it is open for future enhancements.
򐂰 It is host-independent for fixed block-based Open Systems environments.
򐂰 It is external storage RAID controller-independent, which provides a continuous and
ongoing process to qualify more types of controllers.
򐂰 It can use disks that are internal to the nodes (solid-state drives (SSDs) in older SVC
systems) or externally direct-attached in Flash Expansions.

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
򐂰 Creates a single pool of storage
򐂰 Provides logical unit virtualization
򐂰 Manages logical volumes
򐂰 Mirrors logical volumes

The SVC system also provides these functions:


򐂰 Large scalable cache
򐂰 Copy Services
򐂰 IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to
make multiple targets affordable
򐂰 Metro Mirror (synchronous copy)
򐂰 Global Mirror (asynchronous copy)
򐂰 Data migration
򐂰 Space management
򐂰 IBM Easy Tier to migrate the most frequently used data to higher or lower-performance
storage
򐂰 Thin-provisioned logical volumes
򐂰 Compressed volumes to consolidate storage

2.2.1 SAN Volume Controller topology


SAN-based storage is managed by the SVC in one or more “pairs” of SVC hardware nodes.
This configuration is referred to as a clustered system or system. These nodes are attached to
the SAN fabric, with RAID controllers and host systems. The SAN fabric is zoned to allow the
SVC to “see” the RAID controllers, and for the hosts to see the SVC.

The hosts cannot see or operate on the same physical storage (logical unit number (LUN))
from the RAID controller that is assigned to the SVC. Storage controllers can be shared
between the SVC and direct host access if the same LUNs are not shared. The zoning
capabilities of the SAN switch must be used to create distinct zones to ensure that this rule is
enforced.

SAN fabrics can include standard FC, FC over Ethernet, iSCSI over Ethernet, or possible
future types.

Figure 2-2 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high-availability requirements (most of the target clients for the SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN consists of a fault-tolerant
arrangement of two or more counterpart SANs, which provide alternative paths for each
SAN-attached device.

Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant
paths to volumes can be provided in both scenarios.

For simplicity, Figure 2-2 shows only one SAN fabric and two zones: host and storage. In a
real environment, it is a preferred practice to use two redundant SAN fabrics. The SVC can be
connected to up to four fabrics. For more information about zoning, see Chapter 3, “Planning
and configuration” on page 73.

Figure 2-2 SVC conceptual and topology overview

A clustered system of SVC nodes that are connected to the same fabric presents logical
disks or volumes to the hosts. These volumes are created from managed LUNs or managed
disks (MDisks) that are presented by the RAID disk subsystems.

The following two distinct zones are shown in the fabric:
򐂰 A host zone, in which the hosts can see and address the SVC nodes
򐂰 A storage zone, in which the SVC nodes can see and address the MDisks/LUNs that are
presented by the RAID subsystems

Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens
through the SVC nodes. This design is commonly referred to as symmetric virtualization.
LUNs that are not processed by the SVC can still be provided to the hosts.

For iSCSI-based host access, the use of two networks and separating iSCSI traffic within the
networks by using a dedicated virtual local area network (VLAN) path for storage traffic
prevents any IP interface, switch, or target port failure from compromising the host servers’
access to the volumes’ LUNs.

2.3 SAN Volume Controller terminology


To provide a higher level of consistency among IBM storage products, the terminology that is
used starting with the SVC version 7 (and throughout the rest of this book) changed when
compared to previous SVC releases. Table 2-1 summarizes the major changes.

Table 2-1 SVC glossary terms

Clustered system or system (previous SVC term: Cluster)
A collection of nodes that are placed in pairs (I/O Groups) for redundancy to provide a single management interface.

Event (previous SVC term: Error)
An occurrence of significance to a task or system. Events can include the completion or failure of an operation, a user action, or the change in the state of a process.

Host mapping (previous SVC term: VDisk-to-host mapping)
The process of controlling which hosts have access to specific volumes within a clustered system.

Storage pool (pool) (previous SVC term: Managed disk (MDisk) group)
A collection of storage that identifies an underlying set of resources. These resources provide the capacity and management requirements for a volume or set of volumes.

Thin provisioning or thin-provisioned (previous SVC term: Space-efficient)
The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit. Allocates storage when data is written to it.

Volume (previous SVC term: Virtual disk (VDisk))
A fixed amount of physical or virtual storage on a data storage medium (tape or disk) that supports a form of identifier and parameter list, such as a volume label or I/O control.

For more information about the terms and definitions that are used in the SVC environment,
see Appendix B, “Terminology” on page 889.

2.4 SAN Volume Controller components


The SVC product provides block-level aggregation and volume management for attached disk
storage. In simpler terms, the SVC manages a number of back-end storage controllers or
locally attached disks and maps the physical storage within those controllers or disk arrays
into logical disk images, or volumes, that can be seen by application servers and workstations
in the SAN.

The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers that are
trying to manage the back-end storage. The SVC is based on the components that are
described next.

2.4.1 Nodes
Each SVC hardware unit is called a node. The node provides the virtualization for a set of
volumes, cache, and copy services functions. The SVC nodes are deployed in pairs (cluster)
and multiple pairs make up a clustered system or system. A system can consist of 1 - 4 SVC
node pairs.

One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.

Because the nodes are installed in pairs, each node provides a failover function to its partner
node if a node fails.

2.4.2 I/O Groups


Each pair of SVC nodes is also referred to as an I/O Group. An SVC clustered system can
have 1 - 4 I/O Groups.

A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.

When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for
that specific volume are always processed by the same node within the I/O Group. This node
is referred to as the preferred node for this specific volume.

Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. A maximum of
2,048 volumes per I/O Group is allowed. In an eight-node cluster, the maximum is 8,192
volumes. However, both nodes also act as failover nodes for their respective partner node
within the I/O Group. Therefore, a node takes over the I/O workload from its partner node, if
required.

Therefore, in an SVC-based environment, the I/O handling for a volume can switch between
the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.

The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.

If required, host servers can be mapped to more than one I/O Group within the SVC system;
therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume move. It also requires a rescan at the host level to ensure that the
multipathing driver is notified that the allocation of the preferred node changed and the ports
by which the volume is accessed changed. This modification can be done in the situation
where one pair of nodes becomes overused.
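As a minimal sketch of such a move, assuming a volume named VDISK01 and a target I/O Group named io_grp1 (both hypothetical names), the nondisruptive volume move can be started from the CLI with a command similar to the following; verify the exact syntax for your code level:

   movevdisk -iogrp io_grp1 VDISK01

After the command completes, rescan the paths on the host so that the multipathing driver detects the new preferred node and ports.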

2.4.3 System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations
are then set for the individual system. For example, the maximum number of volumes that is
supported per system is 8,192 (with a maximum of 2,048 volumes per I/O Group), and the maximum managed disk capacity that is supported is 32 PB per system.

All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.

A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only SVC
system configuration information is backed up.
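A configuration backup can also be triggered manually from the CLI, as in the following sketch. The svcconfig backup command generates the configuration backup files on the configuration node, from where they can be copied off the system, for example, by using secure copy:

   svcconfig backup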

For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.

For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004924

2.4.4 Stretched system


We describe the two possible implementations of a stretched system.

Stretched systems
A stretched system is an extended high availability (HA) method that is supported by the SVC
to enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system because one-half of the system and I/O Group is
usually in a geographically distant location from the other, often 10 kilometers (6.2 miles) or
more. The maximum distance is approximately 300 km (186.4 miles). It depends on the
round-trip time, which must not be greater than 80 ms. With version 7.4, this round-trip time is
enhanced to 250 ms. A third site is required to host an FC storage system that provides a
quorum disk. This storage system can also be used for other purposes than to act only as a
quorum disk.

Enhanced stretched systems


SVC supports an enhanced stretched system configuration. This configuration can be used
regardless of whether the stretched system is configured with or without inter-switch links
(ISLs) between nodes.

Enhanced stretched systems provide the following primary benefits:


򐂰 In addition to the automatic failover that occurs when a site fails in a standard stretched
system configuration, an enhanced stretched system provides a manual override that can
be used to choose which of the two sites continues operation.
򐂰 Enhanced stretched systems intelligently route I/O traffic between nodes and controllers
to reduce the amount of I/O traffic between sites, and to minimize the impact to host
application I/O latency.
򐂰 Enhanced stretched systems include an implementation of additional policing rules to
ensure the correct configuration of a standard stretched system.

Note: The site attribute in the node and controller object needs to be set in an enhanced
stretched system.
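A minimal CLI sketch of assigning these site attributes follows; the node, controller, and site names are hypothetical, and the complete procedure for enabling the enhanced (stretched) topology should be taken from the product documentation:

   chnode -site site1 node1
   chnode -site site2 node2
   chcontroller -site site1 controller0
   chsystem -topology stretched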

For more information, see Appendix C, “SAN Volume Controller stretched cluster” on
page 903.

2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks or LUNs, which are known as managed disks or
MDisks. Because the SVC does not attempt to provide recovery from physical disk failures
within the back-end controllers, an MDisk often is provisioned from a RAID array. However,
the application servers do not see the MDisks at all. Instead, they see a number of logical
disks, which are known as virtual disks or volumes, which are presented by the SVC I/O
Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.

The MDisks are placed into storage pools where they are divided into a number of extents,
which are 16 MB - 8192 MB, as defined by the SVC administrator.

For more information about the total storage capacity that is manageable per system
regarding the selection of extents, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004924#_Extents

A volume is host-accessible storage that was provisioned out of one storage pool; or, if it is a
mirrored volume, out of two storage pools.

The maximum size of an MDisk is 1 PB. An SVC system supports up to 4096 MDisks
(including internal RAID arrays). At any point, an MDisk is in one of the following three modes:
򐂰 Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it
attempts to change the mode of the MDisk to one of the other modes. The SVC can see
the resource, but the resource is not assigned to a storage pool.
򐂰 Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks that are operating in managed mode might have metadata extents
that are allocated from them and can be used as quorum disks. This mode is the most
common and normal mode for an MDisk.
򐂰 Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy the following major usage scenarios:
– Image mode allows the virtualization of MDisks that already contain data that was
written directly and not through an SVC; rather, it was created by a direct-connected
host.
This mode allows a client to insert the SVC into the data path of an existing storage
volume or LUN with minimal downtime. For more information about the data migration
process, see Chapter 6, “Data migration” on page 241.
Image mode allows a volume that is managed by the SVC to be used with the native
copy services function that is provided by the underlying RAID controller. To avoid the
loss of data integrity when the SVC is used in this way, it is important that you disable
the SVC cache for the volume.
– The SVC provides the ability to migrate to image mode, which allows the SVC to export
volumes and access them directly from a host without the SVC in the path.
Each MDisk that is presented from an external disk controller has an online path count, which is the number of nodes that have access to that MDisk. The maximum count is the
maximum number of paths that is detected at any point by the system. The current count
is what the system sees at this point. A current value that is less than the maximum can
indicate that SAN fabric paths were lost. For more information, see 2.5.1, “Image mode
volumes” on page 26.
SSDs that are in SVC 2145-CG8 or Flash space, which are presented by the external
Flash Enclosures of the SVC 2145-DH8 nodes, are presented to the cluster as MDisks. To
determine whether the selected MDisk is an SSD/Flash, click the link on the MDisk name
to display the Viewing MDisk Details panel.
If the selected MDisk is an SSD/Flash that is on an SVC, the Viewing MDisk Details panel
displays values for the Node ID, Node Name, and Node Location attributes. Alternatively,
you can select Work with Managed Disks → Disk Controller Systems from the
portfolio. On the Viewing Disk Controller panel, you can match the MDisk to the disk
controller system that has the corresponding values for those attributes.
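To see these modes in practice, the lsmdisk command lists the detected MDisks together with their mode, status, and controller. A filtered form, as sketched below, is a convenient way to find candidate MDisks that are not yet assigned to a storage pool (check the filter syntax for your code level):

   lsmdisk
   lsmdisk -filtervalue mode=unmanaged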

2.4.6 Quorum disk
A quorum disk is a managed disk (MDisk) that contains a reserved area for use exclusively by
the system. The system uses quorum disks to break a tie when exactly half the nodes in the
system remain after a SAN failure. This situation is referred to as split brain. Quorum
functionality is not supported on Flash Drives within SVC nodes.

Three candidate quorum disks exist. However, only one quorum disk is active at any time. For
more information about quorum disks, see 2.8.1, “Quorum disks” on page 45.
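The current quorum disk assignments can be inspected with the lsquorum command, which shows the three candidates and indicates which one is active; the assignments can be changed with the chquorum command if required (see the command-line reference for its parameters):

   lsquorum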

2.4.7 Disk tier


It is likely that the MDisks (LUNs) that are presented to the SVC system have various
performance attributes because of the type of disk or RAID array on which they are placed.
The MDisks can be on 15 K disk revolutions per minute (RPMs) Fibre Channel or SAS disk,
Nearline SAS, or Serial Advanced Technology Attachment (SATA), SSDs, or even on Flash
drives.

Therefore, a storage tier attribute is assigned to each MDisk, with the default being
generic_hdd. Starting with the SVC V6.1, a new tier 0 (zero) level disk attribute is available for
Flash, and it is known as generic_ssd.

2.4.8 Storage pool


A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which
volumes are provisioned. A single system can manage up to 128 storage pools. The size of
these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks,
without taking the storage pool or the volumes offline. Expanding a storage pool with a single
drive is not possible.

At any point, an MDisk can be a member in one storage pool only, except for image mode
volumes. For more information, see 2.5.1, “Image mode volumes” on page 26.

Figure 2-3 on page 21 shows the relationships of the SVC entities to each other.

Figure 2-3 Overview of SVC clustered system with I/O Groups (volumes, nodes, storage pools, and disk subsystems)

Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The size of the extent is 16 MB - 8192 MB.

It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring (2.5.4, “Mirrored
volumes” on page 29) to copy volumes between pools.
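For pools that share the same extent size, a volume can be moved with a single command, as in the following hedged sketch (the volume and target pool names are hypothetical):

   migratevdisk -vdisk VDISK01 -mdiskgrp Pool_Target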

The SVC limits the number of extents in a system to 2^22 (approximately 4 million). Because the number of addressable extents is limited, the total capacity of an SVC system depends on the extent size that is chosen by the SVC administrator. The capacity numbers that are specified in Table 2-2 on page 22 for an SVC system assume that all defined storage pools were created with the same extent size.
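As a quick worked example of this relationship, with the maximum of 2^22 = 4,194,304 extents and a 256 MB extent size:

   4,194,304 extents x 256 MB per extent = 1,073,741,824 MB = 1 PB

which matches the 1 PB entry for the 256 MB extent size in Table 2-2. Doubling the extent size doubles the total manageable capacity, up to 32 PB with 8,192 MB extents.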

Table 2-2 Extent size-to-addressability matrix

Columns (left to right): Extent size in MB; Maximum non-thin-provisioned volume capacity in GB; Maximum thin-provisioned volume capacity in GB; Maximum MDisk capacity in GB; Total storage capacity that is manageable per system (a)

16       2,048 (2 TB)       2,000      2,048 (2 TB)          64 TB
32       4,096 (4 TB)       4,000      4,096 (4 TB)          128 TB
64       8,192 (8 TB)       8,000      8,192 (8 TB)          256 TB
128      16,384 (16 TB)     16,000     16,384 (16 TB)        512 TB
256      32,768 (32 TB)     32,000     32,768 (32 TB)        1 PB
512      65,536 (64 TB)     65,000     65,536 (64 TB)        2 PB
1,024    131,072 (128 TB)   130,000    131,072 (128 TB)      4 PB
2,048    262,144 (256 TB)   260,000    262,144 (256 TB)      8 PB
4,096    262,144 (256 TB)   262,144    524,288 (512 TB)      16 PB
8,192    262,144 (256 TB)   262,144    1,048,576 (1024 TB)   32 PB

(a) The total capacity values assume that all of the storage pools in the system use the same extent size.

For most systems, a capacity of 1 - 2 PB is sufficient. A preferred practice is to use 256 MB for
larger clustered systems. The default extent size is 1,024 MB.

Single-tiered storage pool


MDisks that are used in a single-tiered storage pool must have the following characteristics to
avoid causing performance problems and other issues:
򐂰 They have the same hardware characteristics, for example, the same RAID type, RAID
array size, disk type, and RPMs.
򐂰 The disk subsystems that are providing the MDisks must have similar characteristics, for
example, maximum I/O operations per second (IOPS), response time, cache, and
throughput.
򐂰 The MDisks that are used are the same size; therefore, the MDisks provide the same
number of extents. If that is not feasible, check the distribution of the volumes’ extents in
that storage pool.

For more information, see IBM System Storage SAN Volume Controller and Storwize V7000
Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Multitiered storage pool


A multitiered storage pool has a mix of MDisks with more than one type of disk tier attribute,
for example, a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.

Therefore, a multitiered storage pool contains MDisks with various characteristics, as opposed to a single-tier storage pool. However, it is a preferred practice for each tier to have MDisks of the same size and MDisks that provide the same number of extents.

Multitiered storage pools are used to enable the automatic migration of extents between disk
tiers by using the IBM SVC Easy Tier function. For more information about these storage
pools, see Chapter 7, “Advanced features for storage efficiency” on page 361.

2.4.9 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks; they can see only the logical volumes that are created from
combining extents from a storage pool.

The following types of volumes are available:


򐂰 Striped: A volume that is created in striped mode has extents that are allocated from each
MDisk in the storage pool in a round-robin fashion.
򐂰 Sequential: With a sequential mode volume, extents are allocated sequentially from an
MDisk.
򐂰 Image: Image mode is a one-to-one mapped extent mode volume.

The striped mode is the best method to use for most cases. However, sequential extent
allocation mode can slightly increase the sequential performance for certain workloads.
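A minimal creation sketch for a striped volume follows; the pool name, I/O Group, size, and volume name are hypothetical (image mode creation is described in 2.5.1, "Image mode volumes" on page 26):

   mkvdisk -mdiskgrp Pool1 -iogrp 0 -size 100 -unit gb -vtype striped -name VDISK01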

Figure 2-4 shows the striped volume mode and sequential volume mode. How the extent
allocation from the storage pool differs also is shown.

Figure 2-4 Storage pool extents overview

You can allocate the extents for a volume in many ways. The process is under full user control
when a volume is created and the allocation can be changed at any time by migrating single
extents of a volume to another MDisk within the storage pool.

For more information about how to create volumes and migrate extents by using the GUI or
command-line interface (CLI), see Chapter 6, “Data migration” on page 241, Chapter 9, “SAN
Volume Controller operations using the command-line interface” on page 493, and
Chapter 10, “SAN Volume Controller operations using the GUI” on page 655.

2.4.10 Easy Tier performance function
IBM Easy Tier is a performance function that automatically migrates (moves) the extents of a volume from one MDisk storage tier to another MDisk storage tier. Since version 7.3, a three-tier implementation is available that supports three kinds of tier attributes. Easy Tier monitors the host I/O activity and latency on the extents of all volumes that have the Easy Tier function turned on in a multitier storage pool over a 24-hour period.

Next, it creates an extent migration plan that is based on this activity and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled from the high-tier MDisks back to a lower-tiered
MDisk.

Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and
volume level.

The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes. The usage statistics file can be offloaded from the SVC nodes. Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create a summary report.
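The following hedged sketch shows how the Easy Tier setting might be changed at the storage pool level and at the volume level from the CLI; the object names are examples, and the accepted -easytier values should be confirmed for your code level:

   chmdiskgrp -easytier on Pool1
   chvdisk -easytier on VDISK01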

2.4.11 Evaluation mode


To experience the potential benefits of using Easy Tier in your environment before installing
Flash Drives, you can turn on the Easy Tier function for a single-level storage pool. Next, turn
on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring
activity on the volume extents in the pool.

Easy Tier creates a migration report every 24 hours on the number of extents that can be
moved if the pool were a multitiered storage pool. Therefore, although Easy Tier extent
migration is not possible within a single-tier pool, the Easy Tier statistical measurement
function is available.

The usage statistics file can be offloaded from the SVC configuration node by using the GUI
(click Settings → Support). Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create
the statistics report. A web browser is used to view the STAT output. For more information
about the STAT utility, see following web page:
https://ibm.biz/BdEzve

For more information about Easy Tier functionality and generating statistics by using IBM
STAT, see Chapter 7, “Advanced features for storage efficiency” on page 361.

2.4.12 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.

Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.

iSCSI is an alternative means of attaching hosts. However, all communication with back-end
storage subsystems (and with other SVC systems) is still through FC.

Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.

Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that
are configured on the host object.

For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination)
adapter. Host objects can have IQNs and WWPNs.
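A minimal host definition and mapping sketch follows; the host name, WWPN, and volume name are hypothetical, and the exact WWPN parameter name should be verified against the command-line reference for your code level:

   mkhost -name Host01 -fcwwpn 2100000000000001
   mkvdiskhostmap -host Host01 VDISK01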

2.4.13 Maximum supported configurations


To see more information about the maximum configurations that are applicable to the system,
I/O Group, and nodes, select Restrictions in the section of the following SVC support site
that corresponds to your SVC code level:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004924

Certain configuration limits exist in the SVC, including the following list of important limits. For
the most current information, see the SVC support site.
򐂰 Sixteen worldwide node names (WWNNs) per storage subsystem
򐂰 One PB MDisk
򐂰 8192 MB extents
򐂰 Long object names can be up to 63 characters

For more information about these features, see 2.12, “What is new with the SAN Volume
Controller 7.4” on page 69.

2.5 Volume overview


The maximum size of a single volume is 256 TB. A single fully populated (eight-node) SVC
system supports up to 8,192 volumes.

Volumes have the following characteristics or attributes:


򐂰 Volumes can be created and deleted.
򐂰 Volumes can be resized (expanded or shrunk).
򐂰 Volume extents can be migrated at run time to another MDisk or storage pool.
򐂰 Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully
allocated to a thin-provisioned volume and vice versa can be done at run time.
򐂰 Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk
subsystem failures or to improve the read performance.
򐂰 Volumes can be mirrored synchronously or asynchronously for longer distances. An SVC
system can run active volume mirrors to a maximum of three other SVC systems, but not
from the same volume.

򐂰 Volumes can be copied by using FlashCopy. Multiple snapshots and quick restore from
snapshots (reverse FlashCopy) are supported.
򐂰 Volumes can be compressed.

Volumes have two major modes: managed mode and image mode. Managed mode volumes
have two policies: the sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.

2.5.1 Image mode volumes


Image mode volumes are used to migrate LUNs that were previously mapped directly to host
servers over to the control of the SVC.

Image mode provides a one-to-one mapping between the logical block addresses (LBAs)
between a volume and an MDisk. Image mode volumes have a minimum size of one block
(512 bytes) and always occupy at least one extent.

An image mode MDisk is mapped to one, and only one, image mode volume.

The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.

The SVC also supports the reverse process in which a managed mode volume can be
migrated to an image mode volume. If a volume is migrated to another MDisk, it is
represented as being in managed mode during the migration and is only represented as an
image mode volume after it reaches the state where it is a straight-through mapping.

An image mode MDisk is associated with exactly one volume. The last extent is partial (not
filled) if the (image mode) MDisk is not a multiple of the MDisk Group’s extent size. An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any SVC metadata extents that are assigned to it. Managed or image mode
MDisks are always members of a storage pool.

It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the SVC copy services functions can be applied to image mode disks. See
Figure 2-5 on page 27.

Figure 2-5 Image mode volume versus striped volume
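A hedged sketch of bringing an existing LUN under SVC control as an image mode volume follows; the MDisk, storage pool, and volume names are examples:

   mkvdisk -mdiskgrp Storage_Pool_IMG_01 -iogrp 0 -vtype image -mdisk mdisk10 -name Legacy_VDISK01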

2.5.2 Managed mode volumes


Volumes operating in managed mode provide a full set of virtualization functions. Within a
storage pool, the SVC supports an arbitrary relationship between extents on (managed
mode) volumes and extents on MDisks. Each volume extent maps to exactly one MDisk
extent.

Figure 2-6 on page 28 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of the
MDisks: A, B, or C. The mapping table stores the details of this indirection.

In Figure 2-6 on page 28, several of the MDisk extents are unused. No volume extent maps to
them. These unused extents are available for use in creating volumes, migration, expansion,
and so on.

Figure 2-6 Simple view of block virtualization

The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: if the set of MDisks from which to allocate extents contains more than
one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no
free extents when its turn arrives, its turn is missed and the round-robin moves to the next
MDisk in the set that has a free extent.

When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the “striping effect” that is inherent in a
round-robin algorithm places the first extent for many volumes on the same MDisk. Placing
the first extent of a number of volumes on the same MDisk can lead to poor performance for
workloads that place a large I/O load on the first extent of each volume, or that create multiple
sequential streams.

2.5.3 Cache mode and cache-disabled volumes


Under normal conditions, a volume’s read and write data is held in the cache of its preferred
node, with a mirrored copy of write data that is held in the partner node of the same I/O
Group. However, it is possible to create a volume with cache disabled, which means that the
I/Os are passed directly through to the back-end storage controller rather than being held in
the node’s cache.

Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode
volumes. Using SVC Copy Services instead of the underlying disk controller copy services
gives better results.

2.5.4 Mirrored volumes
The mirrored volume feature provides a simple RAID 1 function; therefore, a volume has two
physical copies of its data. This approach allows the volume to remain online and accessible
even if one of the MDisks sustains a failure that causes it to become inaccessible.

The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships;
it is serviced by an I/O Group; and it has a preferred node.

Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.

This feature provides a point-in-time copy functionality that is achieved by “splitting” a copy
from the volume. However, the mirrored volume feature does not address other forms of
mirroring that are based on remote copy, which is sometimes called IBM HyperSwap®, that
mirrors volumes across I/O Groups or clustered systems. It is also not intended to manage
mirroring or remote copy functions in back-end controllers.

Figure 2-7 provides an overview of volume mirroring.

Figure 2-7 Volume mirroring overview

A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.

The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default “synchronization rate” or at a rate that is defined when the
volume is created or modified. The synchronization status for mirrored volumes is recorded
on the quorum disk.

If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel and the volume comes online when both operations are complete with the copies in
sync.

If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.

If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a “no synchronization” option can be selected that
declares the copies as “synchronized” (even when they are not).

To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 KB grains that were written to since the synchronization was lost are copied. This
approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.

Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
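A minimal CLI sketch of this migration technique follows; the pool and volume names are hypothetical, and the copy ID that is removed depends on which copy is the original:

   addvdiskcopy -mdiskgrp Pool_Target VDISK01
   lsvdisksyncprogress VDISK01
   rmvdiskcopy -copy 0 VDISK01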

Where there are two copies of a volume, one copy is known as the primary copy. If the
primary is available and synchronized, reads from the volume are directed to it. The user can
select the primary when the volume is created or can change it later.

Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.

Write I/O operations data flow with a mirrored volume


For write I/O operations to a mirrored volume, the SVC preferred node definition, with the
multipathing driver on the host, is used to determine the preferred path. The host routes the
I/Os through the preferred path, and the corresponding node is responsible for further
destaging written data from cache to both volume copies. Figure 2-8 shows the data flow for
write I/O processing when volume mirroring is used.

Figure 2-8 Data flow for write I/O processing in a mirrored volume in the SVC

As shown in Figure 2-8, all the writes are sent by the host to the preferred node for each
volume (1); then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).

With version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is only written once and is then directly destaged
from the controller to the locally attached disk system. Figure 2-9 shows the data flow in a
stretched environment.

Figure 2-9 Design of an enhanced stretched cluster (write data flow between the preferred and non-preferred nodes at Site 1 and Site 2)

For more information about the change, see Chapter 6 of IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.

A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.

Important: Mirrored volumes can be taken offline if there is no quorum disk available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.

Mirrored volumes use bitmap space at a rate of 1 bit per 256 KB grain, which translates to
1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap
space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap
space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.

2.5.5 Thin-provisioned volumes


Volumes can be configured to be thin-provisioned or fully allocated. A thin-provisioned
volume behaves as though application reads and writes were fully allocated. When a
thin-provisioned volume is created, the user specifies two capacities: the real physical
capacity that is allocated to the volume from the storage pool, and its virtual capacity that is
available to the host. In a fully allocated volume, these two values are the same.

Therefore, the real capacity determines the quantity of MDisk extents that is initially allocated
to the volume. The virtual capacity is the capacity of the volume that is reported to all other
SVC components (for example, FlashCopy, cache, and remote copy) and to the host servers.

The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.

Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.

When a thin-provisioned volume is initially created, a small amount of the real capacity is used for initial metadata. Write I/Os to grains of the thin volume that were not previously written cause grains of the real capacity to be used to store metadata and the actual user data. Write I/Os to grains that were previously written update the grain where data was previously written.

The grain size is defined when the volume is created. The grain size can be 32 KB, 64 KB,
128 KB, or 256 KB. The default grain size is 256 KB, which is the recommended option. If you
select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The grain size
cannot be changed after the thin-provisioned volume is created. Generally, smaller grain sizes
save space, but they require more metadata access, which can adversely affect performance.
If you do not use the thin-provisioned volume as a FlashCopy source or target volume, use
256 KB to maximize performance. If you use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the FlashCopy
function.

Figure 2-10 shows the thin-provisioning concept.

Figure 2-10 Conceptual diagram of thin-provisioned volume

Thin-provisioned volumes store user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.

The metadata storage overhead is never greater than 0.1% of the user data. The overhead is
independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in
a FlashCopy map, use the same grain size as the map grain size for the best performance. If
you are using the thin-provisioned volume directly with a host system, use a small grain size.

Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A read I/O, which requests data from deallocated data space, returns zeros. When a write I/O causes space to be allocated, the grain is zeroed before use. However, if the node is a 2145-DH8, space is not allocated for a host write that contains all zeros. The formatting flag is ignored when a thin volume is created or when the real capacity is expanded; the virtualization component never formats the real capacity of a thin-provisioned volume.

The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity allows a larger amount of data and metadata to be stored on the
volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as
new data is written to the volume. If the user initially assigns too much real capacity to the
volume, the real capacity can be reduced to free storage for other uses.

A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to
automatically add a fixed amount of more real capacity to the thin volume as required.
Therefore, autoexpand attempts to maintain a fixed amount of unused real capacity for the
volume, which is known as the contingency capacity.

The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.

A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and it must expand.

Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
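
For example, the real capacity of a thin-provisioned volume might be expanded from the CLI with a command similar to the following sketch (the size and volume name are example values only):

svctask expandvdisksize -rsize 10 -unit gb thin_vol01

The -rsize parameter expands only the real capacity; the virtual capacity that is presented to the host is unchanged.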

To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is
specified, the event is logged when only 20% of the pool capacity remains free.
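
For example, a capacity warning might be set on a storage pool with a command similar to the following sketch (the threshold and pool name are example values only):

svctask chmdiskgrp -warning 80% Pool1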

A thin-provisioned volume can be converted nondisruptively to a fully allocated volume (or vice versa) by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized.

The fully allocated-to-thin-provisioned migration procedure uses a zero-detection algorithm so that grains that contain all zeros do not cause any real capacity to be used.
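
For example, a fully allocated volume might be converted to a thin-provisioned volume with commands similar to the following sketch (the pool and volume names are example values only, and the sketch assumes that the original fully allocated copy is copy 0). Run the second command only after lsvdisksyncprogress shows that the copies are synchronized:

svctask addvdiskcopy -mdiskgrp Pool2 -rsize 2% -autoexpand -grainsize 256 vol01
svctask rmvdiskcopy -copy 0 vol01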

2.5.6 Volume I/O governing
I/O operations can be throttled so that the maximum amount of I/O activity that a host can
perform on a volume is limited over a specific period.

This governing feature can be used to satisfy a quality of service (QoS) requirement or a
contractual obligation (for example, if a client agrees to pay for I/Os performed, but does not
pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the
physical medium are subject to I/O governing.

The governing rate can be set in I/Os per second or MB per second. It can be altered by
changing the throttle value by running the chvdisk command and specifying the -rate
parameter.
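
For example, a throttle might be set with commands similar to the following sketch (the values and volume name are example values only):

svctask chvdisk -rate 10000 vol01
svctask chvdisk -rate 200 -unitmb vol01

The first form limits the volume to 10,000 I/Os per second; the second form, with the -unitmb parameter, limits it to 200 MBps.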

I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does
not affect the data copy rate from the primary volume. Governing has no effect on
FlashCopy or data migration I/O rates.

An I/O budget is expressed as a number of I/Os (or MBs) over a minute. The budget is evenly
divided among all SVC nodes that service that volume, which means among the nodes that
form the I/O Group of which that volume is a member.

The algorithm operates two levels of policing. While a volume on each SVC node receives I/O
at a rate lower than the governed level, no governing is performed. However, when the I/O
rate exceeds the defined threshold, the policy is adjusted. A check is made every minute to
see that each node is receiving I/O below the threshold level. Whenever this check shows that
the host exceeded its limit on one or more nodes, policing begins for new I/Os.

The following conditions exist while policing is in force:


򐂰 A budget allowance is calculated for a 1-second period.
򐂰 I/Os are counted over a period of a second.
򐂰 If I/Os are received in excess of the 1-second budget on any node in the I/O Group, those
I/Os and later I/Os are pended.
򐂰 When the second expires, a new budget is established, and any pended I/Os are redriven
under the new budget.

This algorithm might cause I/O to backlog in the front end, which might eventually cause a
Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1-second budget on all nodes in the I/O Group for 1 minute, the policing is
relaxed and monitoring takes place over the 1-minute period as before.

2.6 iSCSI overview


iSCSI is an alternative means of attaching hosts to the SVC. All communications with
back-end storage subsystems and with other SVC systems occur through FC only.

The iSCSI function is a software function that is provided by the SVC code, not hardware.

In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, which is based on IP routers and Ethernet switches. iSCSI is a block-level protocol
that encapsulates SCSI commands into TCP/IP packets; therefore, it uses an existing IP
network instead of requiring expensive FC HBAs and a SAN fabric infrastructure.

A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) starts read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server runs a command and completion
is indicated by a special signal alert.

The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.

The following concepts of names and addresses are carefully separated in iSCSI:
򐂰 An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
򐂰 An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
that node. The address consists of a host name or IP address, a TCP port number (for the
target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique IQN, which can be up to
255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.

The iSCSI qualified name format is defined in RFC3720 and contains (in order) the following
elements:
򐂰 The string iqn
򐂰 A date code that specifies the year and month in which the organization registered the
domain or subdomain name that is used as the naming authority string
򐂰 The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name
򐂰 Optional: A colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique

For the SVC, the IQN for its iSCSI target is specified as shown in the following example:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Microsoft Windows server, the IQN (that is, the name for the iSCSI initiator), can be
defined as shown in the following example:
iqn.1991-05.com.microsoft:<computer name>

The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target. The alias is independent of the name and
does not have to be unique. Because it is not unique, the alias must be used in a purely
informational way. It cannot be used to specify a target at login or during authentication.
Targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. The IQN is an identifier, not an address.

Caution: Before you change system or node names for an SVC system that has servers
that are connected to it by way of iSCSI, be aware that because the system and node
name are part of the SVC’s IQN, you can lose access to your data by changing these
names. The SVC GUI displays a warning, but the CLI does not display a warning.

The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.

The login phase of the iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.

If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.

When the login is confirmed, the iSCSI session enters the full feature phase. If more than one
TCP connection was established, iSCSI requires that each command and response pair must
go through one TCP connection. Therefore, each separate read or write command is carried
out without the necessity to trace each request for passing separate flows. However, separate
transactions can be delivered through separate TCP connections within one session.

Figure 2-11 shows an overview of the various block-level storage protocols and the position of
the iSCSI layer.

Figure 2-11 Overview of block-level protocol stacks

2.6.1 Use of IP addresses and Ethernet ports


The SVC node hardware has three Ethernet ports. The configuration details of the three
Ethernet ports can be displayed by the GUI or CLI.

The following types of IP addresses are available:
򐂰 System management IP address
This address is used for access to the SVC CLI, SVC GUI, and to the Common
Information Model Object Manager (CIMOM) that runs on the SVC configuration node.
Only the configuration node presents a system management IP address at any one time.
Two system management IP addresses, one for each of the two Ethernet ports, are
available. Configuration node failover is also supported.
򐂰 Port IP address
This address is used to perform iSCSI I/O to the system. Each node can have a port
IP address for each of its ports.

SVC nodes have up to six Ethernet ports: the onboard ports provide 1 Gbps support, and the
optional Ethernet card adds 10 Gbps support. System management is possible only over the
1 Gbps ports.
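
For example, an iSCSI port IP address might be assigned to a node Ethernet port with a command similar to the following sketch (the node ID, addresses, and trailing port ID are example values only):

svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1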

Figure 2-12 shows an overview of the IP addresses on an SVC node port and how these IP
addresses are moved between the nodes of an I/O Group.

The management IP addresses and the iSCSI target IP addresses fail over to the partner
node N2 if node N1 fails (and vice versa). The iSCSI target IPs fail back to their corresponding
ports on node N1 when node N1 is running again.

Figure 2-12 SVC IP address overview

It is a preferred practice to keep all of the eth0 ports on all of the nodes in the system on the
same subnet. The same practice applies for the eth1 ports; however, it can be a separate
subnet to the eth0 ports.

You can configure a maximum of 256 iSCSI hosts per I/O Group per SVC because of IQN
limits.

2.6.2 iSCSI volume discovery
The iSCSI target implementation on the SVC nodes uses the hardware offload features that
are provided by the node’s hardware. This implementation results in a minimal effect on the
node’s CPU load for handling iSCSI traffic, and simultaneously delivers excellent throughput
(up to 95 MBps user data) on each of the three LAN ports. The use of jumbo frames, which
are maximum transmission unit (MTU) sizes greater than 1,500 bytes, is a preferred practice.

Hosts can discover volumes through one of the following mechanisms:


򐂰 Internet Storage Name Service (iSNS)
The SVC can register with an iSNS name server; you set the IP address of this server by
using the svctask chcluster command. A host can then query the iSNS server for
available iSCSI targets.
򐂰 Service Location Protocol (SLP)
The SVC node runs an SLP daemon, which responds to host requests. This daemon
reports the available services on the node, such as the CIMOM service that runs on the
configuration node. The iSCSI I/O service can now also be reported.
򐂰 iSCSI Send Target request
The host can send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP
port (port 3260).
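
For example, on a Linux host that uses the open-iscsi initiator tools, the Send Target discovery and the subsequent login might look similar to the following sketch (the IP address and IQN are example values only):

iscsiadm -m discovery -t sendtargets -p 10.10.10.11:3260
iscsiadm -m node -T iqn.1986-03.com.ibm:2145.itsosvc.node1 -p 10.10.10.11:3260 --login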

2.6.3 iSCSI authentication


Authentication of the host server from the SVC system is optional and disabled, by default.
The user can choose to enable Challenge Handshake Authentication Protocol (CHAP)
authentication, which involves sharing a CHAP secret between the SVC system and the host.
The SVC as authenticator sends a challenge message to the specific server (peer). The
server responds with a value that is checked by the SVC. If there is a match, the SVC
acknowledges the authentication. If not, the SVC ends the connection and does not allow any
I/O to volumes.

A CHAP secret can be assigned to each SVC host object. The host must then use CHAP
authentication to begin a communications session with a node in the system. A CHAP secret
can also be assigned to the system.
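
For example, a CHAP secret might be assigned to a host object with a command similar to the following sketch (the secret and host name are example values only):

svctask chhost -chapsecret mychapsecret linuxhost01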

Volumes are mapped to hosts, and LUN masking is applied by using the same methods that
are used for FC LUNs.

Because iSCSI can be used in networks where data security is a concern, the specification
allows for separate security methods. For example, you can set up security through a method,
such as IPSec, which is not apparent for higher levels, such as iSCSI, because it is
implemented at the IP level. For more information about securing iSCSI, see Securing Block
Storage Protocols over IP, RFC3723, which is available at this website:
http://tools.ietf.org/html/rfc3723

2.6.4 iSCSI multipathing


A multipathing driver allows the host to send commands over multiple paths to the SVC to
reach the same volume. A fundamental multipathing difference exists between FC and
iSCSI environments.

If FC-attached hosts see their FC target and volumes go offline, for example, because of a
problem in the target node, its ports, or the network, the host must use a separate SAN path
to continue I/O. Therefore, a multipathing driver is always required on the host.

iSCSI-attached hosts see a pause in I/O when a (target) node is reset but, and this is the key
difference, the host reconnects to the same IP target, which reappears after a short period,
and its volumes continue to be available for I/O. Therefore, iSCSI allows failover without host
multipathing. To achieve this failover, the partner node in the I/O Group takes over the port
IP addresses and iSCSI names of the failed node.

Be aware: With the iSCSI implementation in the SVC, an IP address failover/failback between partner nodes of an I/O Group takes place only in cases of a planned or unplanned node restart (node offline). When the partner node returns to online status, a delay of 5 minutes happens before failback occurs for the IP addresses and iSCSI names.

A host multipathing driver for iSCSI is required if you want the following capabilities:
򐂰 Protecting a server from network link failures
򐂰 Protecting a server from network failures if the server is connected through two separate
networks
򐂰 Providing load balancing on the server’s network links

2.7 Advanced Copy Services overview


Advanced Copy Services is a class of functionality of storage arrays and storage devices that
allows various forms of block-level data duplication. By using Advanced Copy Services, you
can make mirror images of part or all of your data, including between distant sites. This
function has the following benefits and uses:
򐂰 Facilitating disaster recovery
򐂰 Building reporting instances to offload billing activities from production databases
򐂰 Building quality assurance systems on regular intervals for regression testing
򐂰 Offloading offline backups from production systems
򐂰 Building test systems by using production data

The SVC supports the following copy services:


򐂰 Synchronous remote copy (Metro Mirror)
򐂰 Asynchronous remote copy (Global Mirror)
򐂰 Asynchronous remote copy with Change Volumes (Global Mirror)
򐂰 Point-in-Time copy (FlashCopy)
򐂰 Data migration (Image mode migration and volume mirroring migration)

Copy services functions are implemented within a single SVC system (FlashCopy and image
mode migration) or between SVC systems, or between SVC and Storwize systems (Metro
Mirror and Global Mirror). To use the Metro Mirror and Global Mirror functions, you must have
the remote copy license installed on each side.

You can create partnerships with the SVC and Storwize systems to allow Metro Mirror and
Global Mirror to operate between the two systems. To create these partnerships, both
clustered systems must be at version 6.3.0 or later.

A clustered system is in one of two layers: the replication layer or the storage layer. The SVC
system is always in the replication layer. The Storwize system is in the storage layer by
default, but the system can be configured to be in the replication layer instead.

Figure 2-13 shows an example of the layers in an SVC and Storwize clustered-system
partnership.

Figure 2-13 Replication between SVC and Storwize systems

Within the SVC, both intracluster copy services functions (FlashCopy and image mode
migration) operate at the block level. Intercluster functions (Global Mirror and Metro Mirror)
operate at the volume layer. A volume is the container that is used to present storage to host
systems. Operating at this layer allows the Advanced Copy Services functions to benefit from
caching at the volume layer and helps facilitate the asynchronous functions of Global Mirror
and lessen the effect of synchronous Metro Mirror.

Operating at the volume layer also allows Advanced Copy Services functions to operate
above and independently of the function or characteristics of the underlying disk subsystems
that are used to provide storage resources to an SVC system. Therefore, if the physical
storage is virtualized with an SVC or Storwize and the backing array is supported by the SVC
or Storwize, you can use disparate backing storage.

FlashCopy: Although FlashCopy operates at the block level, this level is the block level of
the SVC, so the physical backing storage can be anything that the SVC supports. However,
performance is limited to the slowest performing storage that is involved in FlashCopy.

2.7.1 Synchronous and asynchronous remote copy


Global Mirror and Metro Mirror are implemented at the volume layer within the SVC. They are
collectively referred to as remote copy. In general, the purpose of both functions is to maintain
two copies of data. Often, the two copies are separated by distance, but not necessarily. The
remote copy can be maintained in one of two modes: synchronous or asynchronous.

Metro Mirror is the IBM branded term for the synchronous remote copy function. Global Mirror
is the IBM branded term for the asynchronous remote copy function.
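
For example, a remote copy relationship between a local volume and a volume on a partner system might be created and started with commands similar to the following sketch (the volume, relationship, and system names are example values only). Without the -global parameter, a Metro Mirror (synchronous) relationship is created; with it, a Global Mirror (asynchronous) relationship is created:

svctask mkrcrelationship -master vol01 -aux vol01_dr -cluster ITSO_SVC_DR -name rc_vol01 -global
svctask startrcrelationship rc_vol01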

Synchronous remote copy ensures that updates are physically committed (not in volume
cache) in both the primary and the secondary SVC clustered systems before the application
considers the updates complete. Therefore, the secondary SVC clustered system is fully
up-to-date if it is needed in a failover.

However, the application is fully exposed to the latency and bandwidth limitations of the
communication link to the secondary system. In a truly remote situation, this extra latency can
have a significantly adverse effect on application performance; therefore, a limitation of 300
kilometers (~186 miles) exists on the distance of Metro Mirror. This distance induces latency
of approximately 5 microseconds per kilometer, which does not include the latency that is
added by the equipment in the path.

The nature of synchronous remote copy is that latency for the distance and the equipment in
the path is added directly to your application I/O response times. In earlier versions, the
overall latency for a complete round trip could not exceed 80 milliseconds; with version 7.4,
the maximum supported round-trip time was increased to 250 ms. Figure 2-14 lists the
supported round-trip times.

Figure 2-14 Maximum round-trip times

Figure 2-14 Maximum round-trip times

Special configuration guidelines for SAN fabrics are used for data replication. The distance
and available bandwidth of the intersite links must be considered. For more information about
these guidelines, see the SVC Support Portal, which is available at this website:
https://ibm.biz/BdEzB5

For more information about the SVC’s synchronous mirroring, see Chapter 8, “Advanced
Copy Services” on page 405.

In asynchronous remote copy, the application is provided acknowledgment that the write is
complete before the write is committed (written to backing storage) at the secondary site.
Therefore, on a failover, certain updates (data) might be missing at the secondary site.

The application must have an external mechanism for recovering the missing updates or
recovering to a consistent point (which is usually a few minutes in the past). This mechanism
can involve user intervention, but in most practical scenarios, it must be at least partially
automated.

Recovery on the secondary site involves assigning the Global Mirror target volumes on the
SVC target system to one or more hosts (depending on your disaster recovery design),
making those volumes visible to the hosts, and creating any required multipath device
definitions.

The application must then be started and a recovery procedure to either a consistent point in
time or recovery of the missing updates must be performed. For this reason, the initial state of
Global Mirror targets is called crash consistent. This term might sound daunting, but it merely
means that the data on the volumes appears to be in the same state as though an application
crash occurred.

In asynchronous remote copy with cycling mode (Change Volumes), changes are tracked and
copied to intermediate Change Volumes where needed. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the primary
volume, and more data must be recovered if there is a failover. Because the data transfer can
be smoothed over a longer time period, however, lower bandwidth is required to provide an
effective solution.

Because most applications, such as databases, have mechanisms for dealing with this type of
data state for a long time, it is a fairly mundane operation (depending on the application). After
this application recovery procedure is finished, the application starts normally.

RPO: When you are planning your Recovery Point Objective (RPO), you must account for
application recovery procedures, the length of time that they take, and the point to which
the recovery procedures can roll back data.

Although Global Mirror on an SVC can typically provide subsecond RPO times, the
effective RPO time can be up to 5 minutes or longer, depending on the application
behavior.

Most clients aim to automate the failover or recovery of the remote copy through failover
management software. The SVC provides Simple Network Management Protocol (SNMP)
traps and interfaces to enable this automation. IBM Support for automation is provided by IBM
Tivoli Storage Productivity Center.

The Tivoli documentation is available at the IBM Tivoli Storage Productivity Center
Knowledge Center at this website:
https://ibm.biz/BdEzdX

2.7.2 FlashCopy
FlashCopy is the IBM branded name for the point-in-time copy function, which is sometimes
called Time-Zero, or T0, copy. This function makes a copy of the blocks on a source volume
and can duplicate them on 1 - 256 target volumes.

FlashCopy: When the multiple target capability of FlashCopy is used, if any other copy (C)
is started while an existing copy is in progress (B), C has a dependency on B. Therefore, if
you end B, C becomes invalid.

FlashCopy works by creating one or two (for incremental operations) bitmaps to track
changes to the data on the source volume. This bitmap is also used to present an image of
the source data at the point that the copy was taken to target hosts while the actual data is
being copied. This capability ensures that copies appear to be instantaneous.

Bitmap: In this context, bitmap refers to a special programming data structure that is used
to compactly store Boolean values. Do not confuse this definition with the popular image
file format.

If your FlashCopy targets have existing content, the content is overwritten during the copy
operation. Also, the “no copy” (copy rate 0) option, in which only changed data is copied,
overwrites existing content. After the copy operation starts, the target volume appears to have
the contents of the source volume as it existed at the point that the copy was started.
Although the physical copy of the data takes an amount of time that varies based on system
activity and configuration, the resulting data at the target appears as though the copy was
made instantaneously.

FlashCopy permits the management operations to be coordinated through a grouping of FlashCopy pairs (a consistency group) so that a common single point in time is chosen for copying target volumes from their respective source volumes. This capability allows a consistent copy of data for an application that spans multiple volumes.
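
For example, a FlashCopy mapping within a consistency group might be created and started with commands similar to the following sketch (the names and copy rate are example values only):

svctask mkfcconsistgrp -name fccg_app1
svctask mkfcmap -source vol01 -target vol01_t0 -consistgrp fccg_app1 -copyrate 50
svctask startfcconsistgrp -prep fccg_app1

The -prep parameter flushes the cache for the source volumes before the point-in-time copy is triggered.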

The SVC also permits source and target volumes for FlashCopy to be thin-provisioned
volumes. FlashCopy to or from thin-provisioned volumes allows the duplication of data while
using less physical space; the space savings depend on the rate of change of the data.
Typically, such copies are kept for a limited time, because over time they might fill the
physical space that they were allocated. Reverse FlashCopy enables target volumes to
become restore points for the source volume without breaking the FlashCopy relationship
and without having to wait for the original copy operation to complete. The SVC supports
multiple targets and therefore multiple rollback points.

In most practical scenarios, the FlashCopy functionality of the SVC is integrated into a
process or procedure that allows the benefits of the point-in-time copies to be used to
address business needs. IBM offers Tivoli Storage FlashCopy Manager for this functionality.
For more information about Tivoli Storage FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivoli-storage-flashcopy-manager

Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick
recovery of their applications and databases.

2.7.3 Image mode migration and volume mirroring migration


Two methods of Advanced Copy Services are available outside of the licensed Advanced
Copy Services features: image mode migration and volume mirroring migration. The base
software functionality of the SVC includes both of these capabilities.

Image mode migration works by establishing a one-to-one static mapping of volumes and
MDisks. This mapping allows the data on the MDisk to be presented directly through the
volume layer and allows the data to be moved between volumes and the associated backing
MDisks. This function provides a facility to use the SVC as a migration tool in situations
where you otherwise have no recourse, such as migrating from Vendor A hardware to
Vendor B hardware when the two systems have no other compatibility.

Volume mirroring migration is a clever use of the facility that the SVC offers to mirror data on
a volume between two sets of storage pools. As with the logical volume management portion
of certain operating systems, the SVC can mirror data transparently between two sets of
physical hardware. You can use this feature to move data between MDisk groups with no host
I/O interruption by removing the original copy after the mirroring is completed. This feature is
much more limited than FlashCopy and must not be used where FlashCopy is appropriate.

Instead, use this function as an infrequent-use, hardware-refresh aid, because you now can
move between your old storage system and new storage system without interruption.

Careful planning: When you are migrating by using the volume mirroring migration, your
I/O rate is limited to the slowest of the two MDisk groups that are involved. Therefore,
planning carefully to avoid affecting the live systems is imperative.

2.8 SAN Volume Controller clustered system overview


In simple terms, a clustered system or system is a collection of servers that together provide a
set of resources to a client. The key point is that the client has no knowledge of the underlying
physical hardware of the system. The client is isolated and protected from changes to the
physical hardware. This arrangement offers many benefits including, most significantly, high
availability.

Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.

The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.

The eight-node limit for an SVC system is a limitation that is imposed by the microcode and
not a limit of the underlying architecture. Larger system configurations might be available in
the future.

Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes. The clustered system software makes the code portable. It provides the
means to keep the single instances of the SVC code that are running on the separate system
nodes in sync. Therefore, restarting nodes during a code upgrade, adding new nodes,
removing old nodes from a system, or node failures do not affect the SVC's availability.

All active nodes of a system must know that they are members of the system, especially in
situations where it is key to have a solid mechanism to decide which nodes form the active
system, such as the split-brain scenario where single nodes lose contact with other nodes. A
worst case scenario is a system that splits into two separate systems.

Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.

The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.

The node with the lowest Node Unique ID in a system becomes the boss node for the group of
nodes, and it determines (from the quorum rules) whether the nodes can operate as the
system. This node also presents a maximum of two system (cluster) IP addresses on one or
both of its Ethernet ports to allow access for system management.

2.8.1 Quorum disks


The system uses the quorum disk for two purposes: as a tiebreaker if there is a SAN fault
(when half of the nodes that were previously members of the system are present) and to hold
a copy of important system configuration data. Slightly over 256 MB is reserved for this
purpose on each quorum disk candidate. Only one active quorum disk exists in a system;
however, the system uses three MDisks as quorum disk candidates. The system
automatically selects the actual active quorum disk from the pool of assigned quorum disk
candidates.

If a tiebreaker condition occurs, the one-half portion of the system nodes, which can reserve
the quorum disk after the split occurred, locks the disk and continues to operate. The other
half stops its operation. This design prevents both sides from becoming inconsistent with
each other.

When MDisks are added to the SVC system, the SVC system checks the MDisk to see
whether it can be used as a quorum disk. If the MDisk fulfills the requirements, the SVC
assigns the first three MDisks that are added to the system as quorum candidates. One of
these MDisks is selected as the active quorum disk.

Quorum disk requirements: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
򐂰 It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
򐂰 It must have been manually allowed to be a quorum disk candidate by using the chcontroller -allowquorum yes command.
򐂰 It must be in managed mode (no image mode disks).
򐂰 It must have sufficient free extents to hold the system state information and the stored configuration metadata.
򐂰 It must be visible to all of the nodes in the system.

Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.

Important: Verifying quorum disk placement and, if possible, adjusting it so that the
candidates are on separate storage systems reduces the dependency on a single storage
system and can increase quorum disk availability significantly.

You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.
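
For example, the quorum disk assignments might be listed and a specific MDisk assigned as a quorum candidate with commands similar to the following sketch (the MDisk name and quorum index are example values only):

svcinfo lsquorum
svctask chquorum -mdisk mdisk5 2

The trailing argument is the quorum disk index (0, 1, or 2) to which the MDisk is assigned.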

When the set of quorum disk candidates is chosen, it is fixed. However, a new quorum disk
candidate can be chosen in one of the following conditions:
򐂰 When the administrator requests that a specific MDisk becomes a quorum disk by using
the chquorum command
򐂰 When an MDisk that is a quorum disk is deleted from a storage pool
򐂰 When an MDisk that is a quorum disk changes to image mode

An offline MDisk is not replaced as a quorum disk candidate.

For disaster recovery purposes, a system must be regarded as a single entity, so the system
and the quorum disk must be colocated.

Special considerations are required for the placement of the active quorum disk for a
stretched or split cluster and split I/O Group configurations. For more information, see this
website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).

Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.

During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).

If the microcode on a node becomes corrupted, which results in a failure, the workload is
transferred to another node. The code on the failed node is repaired, and the node is
readmitted into the system (which is an automatic process).

2.8.2 Split I/O Groups or a stretched cluster


An I/O Group is formed by a pair of SVC nodes. These nodes act as failover nodes for each
other, and hold mirrored copies of cached volume writes. For more information about I/O
Groups, see 2.4.2, “I/O Groups” on page 16. For more information about stretched cluster
configuration, see Appendix C, “SAN Volume Controller stretched cluster” on page 903.

2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).

The 2145-DH8 nodes provide 32 GB of memory per node, with an optional 64 GB when the
second CPU is installed, which offers more processor power and memory for the Real-time
Compression (RtC) feature. With 64 GB per node, that amounts to 128 GB per I/O Group and
512 GB per eight-node SVC system. The SVC provides a flexible cache model, and the node's
memory can be used as read or write cache. The size of the write cache is limited to a
maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node,
the entire 32 GB of memory can be used as read cache.

Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KB (eight
segments). A track might be only partially populated with valid pages. The SVC coalesces
writes up to a 256 KB track size if the writes are in the same tracks before destage. For
example, if 4 KB is written into a track, another 4 KB is written to another location in the same
track. Therefore, the blocks that are written from the SVC to the disk subsystem can be any
size between 512 bytes up to 256 KB. The large cache and advanced cache management
algorithms within the SVC 2145-DH8 allow it to improve on the performance of many types of
underlying disk technologies. The SVC’s capability to manage, in the background, the
destaging operations that are incurred by writes (in addition to still supporting full data
integrity) assists with SVC’s capability in achieving good database performance.

Many changes were made to the way that SVC uses its cache in the 7.3 code level. The
cache is separated into two layers: an upper cache and a lower cache.

Figure 2-15 shows the separation of the upper and lower cache.

Figure 2-15 Separation of upper and lower cache (the new dual-layer cache architecture: the upper cache is a simple write cache, the lower cache provides the algorithm intelligence, and buffer space is shared between the two layers)

The upper cache delivers the following functionality, which allows the SVC to streamline data
write performance:
򐂰 Provides fast write response times to the host by being as high up in the I/O stack as
possible
򐂰 Provides partitioning

The lower cache delivers the following additional functionality:


򐂰 Ensures that the write cache between two nodes is in synch
򐂰 Caches partitioning to ensure that a slow back end cannot consume the entire cache
򐂰 Uses a destage algorithm that adapts to the amount of data and the back-end
performance
򐂰 Provides read caching and prefetching

Combined, the two levels of cache also deliver the following functionality:
򐂰 Pins data when the LUN goes offline
򐂰 Provides enhanced statistics for Tivoli Storage Productivity Center and maintains
compatibility with an earlier version
򐂰 Provides trace for debugging
򐂰 Reports medium errors
򐂰 Resynchronizes cache correctly and provides the atomic write functionality
򐂰 Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
򐂰 Supports fast-write (two-way and one-way), flush-through, and write-through
򐂰 Integrates with T3 recovery procedures
򐂰 Supports two-way operation
򐂰 Supports none, read-only, and read/write as user-exposed caching policies
򐂰 Supports flush-when-idle
򐂰 Supports expanding cache as more memory becomes available to the platform
򐂰 Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O Group
򐂰 Enables switching of the preferred node without needing to move volumes between I/O
Groups

Depending on the size, age, and technology level of the disk storage system, the total
available cache in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in either the SVC or the
disk controller level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever the cache is located. Therefore, if the storage controller
level of the cache has the greater capacity, expect hits to this cache to occur, in addition to
hits in the SVC cache.

Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on both the underlying storage technology and the degree to which the workload
exhibits hotspots or sensitivity to cache size or cache algorithms.

2.8.4 Clustered system management
The SVC can be managed by one of the following interfaces:
򐂰 A text command-line interface (CLI), which is accessed through a Secure Shell (SSH)
connection, for example, PuTTY
򐂰 A web browser-based graphical user interface (GUI)
򐂰 Tivoli Storage Productivity Center

The GUI and a web server are installed in the SVC system nodes. Therefore, any browser
can access the management GUI if the browser is pointed at the system IP address.

Management console
The management console for the SVC was formerly the IBM System Storage Productivity
Center (SSPC). This appliance is no longer needed; the SVC can be managed directly
through its internal management GUI.

2.9 User authentication


The SVC provides the following methods of user authentication to control access to the
web-based management interface (GUI) and CLI:
򐂰 Local authentication is performed within the SVC system.
The available local CLI authentication methods are SSH key authentication and user
name and password (which was introduced with SVC 6.3). For more information about the
CLI setup, see 4.4, “Secure Shell overview” on page 154.
Local GUI authentication is done by using the user name and password. For more
information about the GUI setup, see 4.3, “Configuring the GUI” on page 153.
򐂰 Remote authentication means that the validation of a user’s permission to access the
SVC’s management CLI or GUI is performed at a remote authentication server. That is,
except for the superuser account, no local user account administration is necessary on the
SVC.
You can use an existing user management system in your environment to control the SVC
user access, which implements a single sign-on (SSO) for the SVC.

2.9.1 Remote authentication through LDAP


Until SVC 6.2, the only supported remote authentication service was the Tivoli Embedded
Security Services, which is part of the Tivoli Integrated Portal. Beginning with SVC 6.3,
remote authentication through native Lightweight Directory Access Protocol (LDAP) was
introduced. The supported types of LDAP servers are IBM Tivoli Directory Server, Microsoft
Active Directory (MS AD), and OpenLDAP, for example, running on a Linux system.

Users that are authenticated by an LDAP server can log in to the SVC web-based GUI and
the CLI. Unlike remote authentication through Tivoli Integrated Portal, users do not need to be
configured locally for CLI access. An SSH key is not required for CLI login in this scenario,
either. However, locally administered users can coexist with remote authentication enabled.
The default administrative user that uses the name superuser must be a local user. The
superuser cannot be deleted or manipulated, except for the password and SSH key.

If multiple LDAP servers are available, you can assign multiple LDAP servers to improve
availability. Authentication requests are processed by those LDAP servers that are marked as
preferred unless the connections fail or a user is not found. Requests are distributed across
all preferred servers for load balancing in a round-robin fashion.

A user that is authenticated remotely by an LDAP server is granted permissions on the SVC
according to the role that is assigned to the group of which it is a member. That is, any SVC
user group with its assigned role, for example, CopyOperator, must exist with an identical
name on the SVC system and on the LDAP server, if users in that role are to be authenticated
remotely.

You must adhere to the following guidelines:


򐂰 Native LDAP authentication or Tivoli Integrated Portal can be selected, but not both.
򐂰 If more than one LDAP server is defined, they all must be of the same type, for example,
MS AD.
򐂰 The SVC user group must be enabled for remote authentication.
򐂰 The user group name must be identical in the SVC user group management and on the
LDAP server; the user group name is case sensitive.
򐂰 The LDAP server must transmit a group membership attribute for the user. The default
attribute name for MS AD and OpenLDAP is memberOf. The default attribute name for
Tivoli Directory Server is ibm-allGroups. For OpenLDAP implementations, you might need
to configure the memberOf overlay if it is not in place.

In the following example, we demonstrate LDAP user authentication that uses a Microsoft
Windows Server 2008 R2 domain controller that is acting as an LDAP server.

Complete the following steps to configure remote authentication:


1. Configure the SVC for remote authentication by selecting Settings → Security, as shown
in Figure 2-16.

Figure 2-16 Configure Remote Authentication

2. Click Configure Remote Authentication.


3. Select the authentication type, as shown in Figure 2-17 on page 51. Select LDAP and
then click Next.

Figure 2-17 Select the authentication type

4. You must configure the following parameters in the Configure Remote Authentication
window, as shown in Figure 2-18 and Figure 2-19 on page 52:
– For LDAP Type, select Microsoft Active Directory. (For an OpenLDAP server, select
Other for the type of LDAP server.)
– For Security, choose None. (If your LDAP server requires a secure connection, select
Transport Layer Security; the LDAP server’s certificate is configured later.)
– Click Advanced Settings to expand the bottom part of the window. Leave the User
Name and Password fields empty if your LDAP server supports anonymous bind. For
our MS AD server, we enter the credentials of an existing user on the LDAP server with
permission to query the LDAP directory. You can enter this information in the format of
an email address, for example, administrator@itso.corp, or in the distinguished
format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common
name portion cn=users for MS AD servers.
– If your LDAP server uses separate attributes from the predefined attributes, you can
edit them here. You do not need to edit the attributes when MS AD is used as the LDAP
service.

Figure 2-18 Configure Remote Authentication

Figure 2-19 Configure Remote Authentication Advanced Settings

5. Figure 2-20 shows the Configure Remote Authentication window, where we configure the
following LDAP server details:
– Enter the IP address of at least one LDAP server.
– Although it is marked as optional, it might be required to enter a Base DN in the
distinguished name format, which defines the starting point in the directory at which to
search for users, for example, dc=itso,dc=corp.
– You can add more LDAP servers by clicking the plus (+) icon.
– Check Preferred if you want to use preferred LDAP servers.
– Click Finish to save the settings.

Figure 2-20 LDAP servers configuration

Now that remote authentication is enabled and configured on the SVC, we work with the
user groups. For remote authentication through LDAP, no local SVC users are maintained, but
the user groups must be set up correctly. The existing built-in SVC user groups can be used,
and groups that are created in SVC user management can be used. However, the use of
self-defined groups might be advisable to prevent the SVC default group names from
interfering with existing group names on the LDAP server. Any user group, whether built-in or
self-defined, must be enabled for remote authentication.
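
As an illustration, a remote-authentication-enabled user group might also be created from the CLI with a command similar to the following sketch (the group name is an example value only):

svctask mkusergrp -name SVC_LDAP_CopyOperator -role CopyOperator -remote

The -remote parameter enables the group for remote (LDAP) authentication.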

Complete the following steps to create a user group:
1. Select Access → Users → Create User Group. As shown in Figure 2-21, we create a
user group.

Figure 2-21 Create User Group window

2. In the Create User Group window that is shown in Figure 2-21, complete the following
steps:
a. Enter a meaningful Group Name (for example, SVC_LDAP_CopyOperator), according to
its intended role.
b. Select the Role that you want to use by clicking Copy Operator.
c. To mark LDAP for Remote Authentication, select Enable for this group and then click
Create.
You can modify these settings in a group’s properties at any time.

Next, we complete the following steps to create a group with the same name on the LDAP
server, that is, in the Active Directory Domain:
1. On the Domain Controller, start the Active Directory Users and Computers management
console and browse your domain structure to the entity that contains the user groups.
Click the Create new user group icon as highlighted in Figure 2-22 on page 54 to create
a group.

Figure 2-22 Create a user group on the LDAP server

2. Enter the same name, SVC_LDAP_CopyOperator, in the Group Name field, as shown in
Figure 2-23. (The name is case sensitive.) Select the correct Group scope for your
environment and select Security for Group type. Click OK.

Figure 2-23 Edit the group properties

3. Edit the user’s properties so that the user can log in to the SVC. Make the user a member
of the appropriate user group for the intended SVC role, as shown in Figure 2-24 on
page 55, and click OK to save and apply the settings.

Figure 2-24 Make the user a member of the appropriate group

We are now ready to authenticate the users for the SVC through the remote server. To ensure
that everything works correctly, we complete the following steps to run a few tests to verify the
communication between the SVC and the configured LDAP service:
1. Select Settings → Security, and then select Global Actions → Test LDAP
Connections, as shown in Figure 2-25.

Figure 2-25 LDAP connections test

Figure 2-26 on page 56 shows the result of a successful connection test.

Figure 2-26 Successful LDAP connection test

2. We test a real user authentication attempt. Select Settings → Security, then select
Global Actions → Test LDAP Authentication, as shown in Figure 2-27.

Figure 2-27 Test LDAP Authentication

3. As shown in Figure 2-28, enter the User Credentials of a user that was defined on the
LDAP server, and then click Test.

Figure 2-28 LDAP authentication test

The message, CMMVC7148I Task completed successfully, is shown after a successful
test.

Both the LDAP connection test and the LDAP authentication test must complete successfully
to ensure that LDAP authentication works correctly. If an error message indicates user
authentication problems during the LDAP authentication test, it might help to analyze the
LDAP server's response outside of the SVC. You can use any native LDAP query tool, for
example, the no-charge LDAPBrowser tool, which is available at this website:
http://www.ldapbrowser.com/

For a pure MS AD environment, you can use the Microsoft Sysinternals ADExplorer tool,
which is available at this website:
http://technet.microsoft.com/en-us/sysinternals/bb963907

Assuming that the LDAP connection and the authentication test succeeded, users can log in
to the SVC GUI and CLI by using their network credentials, for example, their Microsoft
Windows domain user name and password.

Figure 2-29 shows the web GUI login window with the Windows domain credentials entered.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.

Figure 2-29 GUI login

After a successful login, the user name is displayed in a welcome message in the upper-right
corner of the window, as highlighted in Figure 2-30 on page 58.

Figure 2-30 Welcome message after a successful login

Logging in by using the CLI is possible with the short user name or the fully qualified name.
The lscurrentuser CLI command displays the user name of the currently logged in user and
their role.

2.9.2 SAN Volume Controller user names


User names must be unique and they can contain up to 256 printable ASCII characters.

Forbidden characters are the single quotation mark (‘), colon (:), percent symbol (%),
asterisk (*), comma (,), and double quotation marks (“).

Also, a user name cannot begin or end with a blank space.

Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters; however, passwords cannot begin or end with blanks.

2.9.3 SAN Volume Controller superuser


A special local user that is called the superuser always exists on every system and cannot be
deleted. Its password is set by the user during clustered system initialization. The superuser
password can be reset from the node's front panel. (This reset function can be disabled;
however, disabling it makes the system inaccessible if all users forget their passwords or lose
their SSH keys.)

To register an SSH key for the superuser to provide command-line access, select Service
Assistant → Configure CLI Access to assign a temporary key. However, the key is lost
during a node restart. The permanent way to add the key is through the normal GUI; select
User Management → superuser → Properties to register the SSH key for the superuser.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.

2.9.4 SAN Volume Controller Service Assistant Tool
The SVC has a tool for performing service tasks on the system. In addition to performing
various service tasks from the front panel, you can service a node through an Ethernet
connection by using a web browser to access a GUI interface. The function is called the
Service Assistant Tool and requires you to enter the superuser password during login.

2.9.5 SAN Volume Controller roles and user groups


Each user group is associated with a single role. The role for a user group cannot be
changed, but user groups (with one of the defined roles) can be created.

User groups are used for local and remote authentication. Because the SVC knows of five
roles, by default, five user groups are defined in an SVC system, as shown in Table 2-3.

Table 2-3 User groups


User group ID User group Role

0 SecurityAdmin SecurityAdmin

1 Administrator Administrator

2 CopyOperator CopyOperator

3 Service Service

4 Monitor Monitor

The access rights for a user who belongs to a specific user group are defined by the role that
is assigned to the user group. It is the role that defines what a user can or cannot do on an
SVC system.

Table 2-4 on page 60 shows the roles ordered (from the top) by the least privileged Monitor
role down to the most privileged SecurityAdmin role. The NasSystem role has no special user
group.

Table 2-4 Commands that are permitted for each role
Monitor: All svcinfo or informational commands, and svctask finderr, dumperrlog, dumpinternallog, chcurrentuser, ping, svcconfig backup, and svqueryclock

Service: All commands that are allowed for the Monitor role, and applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and setsystemtime

CopyOperator: All commands that are allowed for the Monitor role, and prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership

Administrator: All commands, except chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset

SecurityAdmin: All commands, except those commands that are allowed by the NasSystem role

NasSystem: svctask addmember, activatemember, and expelmember; create and delete file system volumes

2.9.6 SAN Volume Controller local authentication


Local users are users that are managed entirely on the clustered system without the
intervention of a remote authentication service. Local users must have a password or an SSH
public key, or both. Key authentication is attempted first with the password as a fallback. The
password and the SSH key are used for command-line or file transfer (SecureCopy) access.
For GUI access, only the password is used.
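
As an illustration of creating a local user from the CLI, the following sketch creates a user
with a password in the CopyOperator user group. The user name and password are examples only;
on most code levels, an SSH public key can be associated at the same time with the -keyfile
parameter:

svctask mkuser -name jdoe -usergrp CopyOperator -password Passw0rd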

Local users: Local users are created for each SVC system. Each user has a name, which
must be unique across all users in one system.

If you want to allow access for a user on multiple systems, you must define the user in each
system with the same name and the same privileges.

A local user always belongs to only one user group. Figure 2-31 on page 61 shows an
overview of local authentication within the SVC.



Figure 2-31 Simplified overview of SVC local authentication

2.9.7 SAN Volume Controller remote authentication and single sign-on


You can configure an SVC system to use a remote authentication service. Remote users are
users that are managed by the remote authentication service and require command-line or
file-transfer access.

Remote users must be defined in the SVC system only if command-line access is required;
no local user definition is required for GUI-only remote access. For users that require CLI
access with remote authentication, the user must be created locally with the remote
authentication flag set and a password defined on the system.

Remote users do not belong to a local user group because the remote authentication service,
for example, an LDAP directory server, such as IBM Tivoli Directory Server or Microsoft
Active Directory, delivers the user group information.

Figure 2-32 on page 62 gives an overview of SVC remote authentication.



Figure 2-32 Simplified overview of SVC remote authentication

The authentication service that is supported by the SVC is the Tivoli Embedded Security
Services server component level 6.2.

The Tivoli Embedded Security Services server provides the following key features:
򐂰 Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The type of protocol that is used to access
the central directory or the kind of the directory system that is used is not apparent to the
SVC.
򐂰 Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
they are using what appears to them to be a single system. SSO is used within Tivoli
Productivity Center. When the SVC access is started from within Tivoli Productivity
Center, the user does not have to log in to the SVC because the user logged in to Tivoli
Productivity Center.

Using a remote authentication service


Complete the following steps to use the SVC with a remote authentication service:
1. Configure the system with the location of the remote authentication server:
– Change settings by using the following command:
svctask chauthservice
– View current settings by using the following command:
svcinfo lscluster
The SVC supports an HTTP or HTTPS connection to the Tivoli Embedded Security
Services server. If the HTTP option is used, the user and password information is
transmitted in clear text over the IP network.



2. Configure user groups on the system that match those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, an SVC user group must exist with the same name and the remote setting
enabled.
For example, you can have a group that is called sysadmins, whose members require the
SVC Administrator role. Configure this group by using the following command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user’s groups match any of the SVC user groups, the user is not permitted to
access the system.
3. Configure users that do not require SSH access. Any SVC users that use the remote
authentication service and do not require SSH access must be deleted from the system.
The superuser cannot be deleted; it is a local user and cannot use the remote
authentication service.
4. Configure users that require SSH access. Any SVC users that use the remote
authentication service and require SSH access must have their remote setting enabled
and the same password set on the system and the authentication service. The remote
setting instructs the SVC to consult the authentication service for group information after
the SSH key authentication step to determine the user’s role. The need to configure the
user’s password on the system in addition to the authentication service is because of a
limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, the SVC system and the system that is
running the Tivoli Embedded Security Services server must have the same view of the
current time. The easiest way is to have them both use the same Network Time Protocol
(NTP) server; a CLI sketch follows the note below.

Note: Failure to follow this step can lead to poor interactive performance of the SVC
user interface or incorrect user-role assignments.
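
To satisfy step 5, the NTP server address can be set from the SVC CLI. The IP address that
is shown is an example only; the chsystem command applies to current software levels, and
older levels use the equivalent chcluster command:

svctask chsystem -ntpip 10.11.12.5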

Also, Tivoli Storage Productivity Center uses the Tivoli Integrated Portal infrastructure and its
underlying IBM WebSphere® Application Server capabilities to use an LDAP registry and
enable SSO.

For more information about implementing SSO within Tivoli Storage Productivity Center 4.2,
see the chapter about LDAP authentication support and SSO in IBM Tivoli Storage
Productivity Center V4.2 Release Guide, SG24-7894, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247894.html?Open

2.10 SAN Volume Controller hardware overview


As defined in the underlying COMPASS architecture, the hardware nodes are based on Intel
processors with standard PCI Express (PCIe) adapters to interface with the SAN and the LAN.

The new SVC 2145-DH8 Storage Engine has the following key hardware features:
򐂰 One or two Intel Xeon E5 v2 Series eight-core processors, each with 32 GB memory
򐂰 16 Gb FC, 8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet I/O ports for FC, iSCSI, and Fibre
Channel over Ethernet (FCoE) connectivity
򐂰 Optional feature: Hardware-assisted compression acceleration
򐂰 Optional feature: 12 Gb SAS expansion enclosure attachment for internal flash storage



򐂰 Two integrated battery units
򐂰 2U, 19-inch rack mount enclosure with ac power supplies

Model 2145-DH8 includes three 1 Gb Ethernet ports standard for iSCSI connectivity. Model
2145-DH8 can be configured with up to four I/O adapter features that provide up to eight
16 Gb FC ports, up to twelve 8 Gb FC ports, or up to four 10 Gb Ethernet (iSCSI/Fibre
Channel over Ethernet (FCoE)) ports. For more information, see the optional feature section
in the knowledge center:
https://ibm.biz/BdEPQ6

Real-time Compression workloads can benefit from Model 2145-DH8 configurations with two
eight-core processors with 64 GB of memory (total system memory). Compression workloads
can also benefit from the hardware-assisted acceleration that is offered by the addition of up
to two compression accelerator cards. The SVC Storage Engines can be clustered to help
deliver greater performance, bandwidth, and scalability. An SVC clustered system can contain
up to four node pairs or I/O Groups. Model 2145-DH8 storage engines can be added into
existing SVC clustered systems that include previous generation storage engine models.

For more information, see IBM SAN Volume Controller Software Installation and
Configuration Guide, GC27-2286.

For more information about integration into existing clustered systems, compatibility, and
interoperability with installed nodes and uninterruptible power supplies, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1002999

The SVC 2145-DH8 includes preinstalled V7.4 software.

Figure 2-33 shows the front-side view of the SVC 2145-DH8 node.

Figure 2-33 SVC 2145-DH8 storage engine



2.10.1 Fibre Channel interfaces
The SVC provides link speeds of 8/16 Gbps on SVC 2145-DH8 nodes. The nodes include up
to three 4-port 8 Gbps HBAs or up to four 2-port 16 Gbps HBAs. The FC ports on these node
types auto-negotiate the link speed that is used with the FC switch. The ports normally
operate at the maximum speed that is supported by the SVC port and the switch. However, if
many link errors occur, the ports might operate at a lower speed than what is supported.

The actual port speed for each of the ports can be displayed through the GUI, CLI, the node’s
front panel, and by light-emitting diodes (LEDs) that are placed at the rear of the node.

For more information, see SAN Volume Controller Model 2145-DH8 Hardware Installation
Guide, GC27-6490. The PDF is at this website:
https://ibm.biz/BdEzM7

The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.

If longwave SFPs are used in the SVC nodes, the longest supported FC link between the
SVC and switch is 40 km (24.85 miles).

Table 2-5 shows the cable length that is supported by shortwave SFPs.

Table 2-5 Overview of supported cable length

FC speed            OM1 (M6)              OM2 (M5)              OM3 (M5E)             OM4 (M5F)
                    standard 62.5/125 µm  standard 50/125 µm    optimized 50/125 µm   optimized 50/125 µm

2 Gbps FC           150 m (492.1 ft)      300 m (984.3 ft)      500 m (1640.4 ft)     N/A

4 Gbps FC           70 m (229.7 ft)       150 m (492.1 ft)      380 m (1246.7 ft)     400 m (1312.3 ft)

8 Gbps FC limiting  20 m (65.6 ft)        50 m (164.0 ft)       150 m (492.1 ft)      190 m (623.4 ft)

16 Gbps FC          15 m (49.2 ft)        35 m (114.8 ft)       100 m (328.1 ft)      125 m (410.1 ft)

Table 2-6 shows the applicable rules that relate to the number of inter-switch link (ISL) hops
that are allowed in a SAN fabric between the SVC nodes or the system.

Table 2-6 Number of supported ISL hops

Between the nodes in an I/O Group: 0 (connect to the same switch)
Between the nodes in separate I/O Groups: 0 (connect to the same switch)
Between the nodes and the disk subsystem: 1 (recommended: 0, connect to the same switch)
Between the nodes and the host server: Maximum of 3

2.10.2 LAN interfaces


The 2145-DH8 node has three 1 Gbps LAN ports available. Also, this node supports 10 Gbps
Ethernet ports that can be used for iSCSI I/O.



The system configuration node can also be accessed over the Technician Port. The clustered
system can be managed by SSH clients or GUIs on System Storage Productivity Centers on
separate physical IP networks. This capability provides redundancy if one of these IP
networks fails.

Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each
SVC node port. These IP addresses are independent of the system configuration IP addresses.
An IP address overview is shown in Figure 2-12 on page 37.

2.10.3 FCoE interfaces


Since version 6.4, the SVC also includes FCoE support. FCoE is still in its infancy, but 16 Gbit
native Fibre Channel might be the last speed increase in mass production use. After that,
SANs and Ethernet networks finally converge at 40 Gbit and beyond. Provided that the
Converged Enhanced Ethernet (CEE) infrastructure meets the various Fibre Channel
Forwarder (FCF) requirements, FCoE host attachment is supported. Disk and fabric
attachment are still by way of native Fibre Channel. As with most SVC features, the FCoE
support is delivered as a software upgrade.

If you have an SVC with the 10 Gbit feature, FCoE support is added with an upgrade to version
6.4 or later. The same 10 Gbit ports are both iSCSI and FCoE capable. In terms of transport
speed, the FCoE ports (10 Gbit) compare well with the native Fibre Channel ports (8 Gbit),
and recent enhancements to the iSCSI support bring iSCSI performance to similar levels.

2.11 Flash drives


Flash drives, or more specifically, single-level cell (SLC) or multi-level cell (MLC)
NAND flash-based disks, can be used to overcome a growing problem that is known as the
memory or storage bottleneck.

2.11.1 Storage bottleneck problem


The memory or storage bottleneck describes the steadily growing gap between the time that
is required for a CPU to access data that is in its cache memory (typically in nanoseconds)
and data that is on external storage (typically in milliseconds).

Although CPUs and cache/memory devices continually improve their performance, mechanical
disks that are used as external storage do not, in general, improve their performance at the
same pace. Figure 2-34 on page 67 shows these access time differences.



Figure 2-34 The memory or storage bottleneck

The actual times that are shown are not that important, but a dramatic difference exists
between accessing data that is in cache and data that is on an external disk.

We added a second scale to Figure 2-34, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.

Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown
remarkable progress in capacity growth, form factor and size reduction, price decrease
(cost per GB), and reliability.

However, the number of I/Os that a disk can handle and the response time to process a single
I/O did not improve at the same rate, although they certainly did improve. In real
environments, we can expect up to 200 IOPS per disk from today's enterprise-class FC or
serial-attached SCSI (SAS) disks, with an average response time (latency) of approximately
6 ms per I/O.

Table 2-7 shows a comparison of drive types and IOPS.

Table 2-7 Comparison of drive types to IOPS


Drive type IOPS

Nearline - SAS 90

SAS 10K 140

SAS 15K 200

Flash > 50000

Today’s rotating disks continue to advance in capacity (several TBs), form factor/footprint
(8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and price (cost per
GB), but they are not getting much faster.



The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements likely will occur in the future; however, a
significant step, such as doubling the RPM (if technically even possible), inevitably has an
associated increase in power usage and price that will be an inhibitor.
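
As a simple sizing illustration that uses the values in Table 2-7: a workload that requires a
sustained 10,000 IOPS needs approximately 10,000 / 200 = 50 SAS 15K drives, or 10,000 / 90,
which is roughly 112 Nearline SAS drives, whereas a single flash drive (more than 50,000 IOPS)
can, in principle, absorb the same workload. This estimate ignores RAID write penalties and
cache effects, so treat it as an order-of-magnitude comparison only.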

2.11.2 Flash Drive solution


Flash Drives can provide a solution for this dilemma. No rotating parts mean improved
robustness and lower power usage. A remarkable improvement in I/O performance and a
massive reduction in the average I/O response times (latency) are the compelling reasons to
use Flash Drives in today’s storage subsystems.

Enterprise-class Flash Drives typically deliver 85,000 read and 36,000 write IOPS with typical
latencies of 50 µs for reads and 800 µs for writes. Their form factors of 6.35 cm
(2.5 inches)/8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make them easy to
integrate into existing disk shelves.

2.11.3 Flash Drive market


The Flash Drive storage market is rapidly evolving. The key differentiator among today’s Flash
Drive products that are available on the market is not the storage medium, but the logic in the
disk internal controllers. The top priorities in today's controller development are the optimal
handling of wear leveling (also referred to as wear-out leveling), which determines the
controller's capability to ensure a device's durability, and closing the remarkable gap between
read and write I/O performance.

Today’s Flash Drive technology is only a first step into the world of high-performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory


SCM promises a massive improvement in performance (IOPS), areal density, cost, and
energy efficiency compared to today’s Flash Drive technology. IBM Research is actively
engaged in these new technologies.

For more information about nanoscale devices, see this website:


http://researcher.watson.ibm.com/researcher/view_project.php?id=4284

For more information about SCM, see this website:


https://ibm.biz/BdEPQ7

For a comprehensive overview of the Flash Drive technology in a subset of the well-known
Storage Networking Industry Association (SNIA) Technical Tutorials, see these websites:
򐂰 http://www.snia.org/education/tutorials/2010/spring#solid
򐂰 http://www.snia.org/education/tutorials/fms

When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.



2.11.4 Flash Drives and SAN Volume Controller
The SVC supports the use of internal (up to Model 2145-CG8) or external Flash Drives.

Internal Flash Drives


Certain SVC models support 6.35 cm (2.5-inch) Flash Drives as internal storage. A maximum
of four drives can be installed per node, and up to 32 drives can be installed in a clustered
system. These drives can be used to create RAID MDisks that, in turn, can be used to create
volumes.

Internal Flash Drives can be configured in the following two RAID levels:
򐂰 RAID 1 - RAID 10: In this configuration, one half of the mirror is in each node of the I/O
Group, which provides redundancy if a node failure occurs.
򐂰 RAID 0: In this configuration, all the drives are assigned to the same node. This
configuration is intended to be used with Volume Mirroring because no redundancy is
provided if a node failure occurs.

External Flash Drives


The SVC can manage Flash Drives in externally attached storage controllers or enclosures.
The Flash Drives are configured as an array with a LUN and are presented to the SVC as a
normal MDisk. The solid-state MDisk tier then must be set by using the chmdisk -tier
generic_ssd command or the GUI.
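
For example, assuming that the external flash array was discovered as mdisk5 (the MDisk name
is an example only), the tier can be set and then verified from the CLI:

svctask chmdisk -tier generic_ssd mdisk5
svcinfo lsmdisk mdisk5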

The Flash MDisks can then be placed into a single Flash Drive tier storage pool.
High-workload volumes can be manually selected and placed into the pool to gain the
performance benefits of Flash Drives.

For a more effective use of Flash Drives, place the Flash Drive MDisks into a multitiered
storage pool that is combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier
turned on, Easy Tier automatically detects and migrates high-workload extents onto the
solid-state MDisks.

For more information about IBM Flash Storage, see this website:
http://www.ibm.com/systems/storage/flash/

2.12 What is new with the SAN Volume Controller 7.4


This section highlights the new features that the SVC 7.4 offers.

2.12.1 SAN Volume Controller 7.4 supported hardware list, device driver, and
firmware levels
With the SVC 7.4 release (as in every release), IBM offers functional enhancements, new
hardware that can be integrated into existing or new SVC systems, and interoperability
enhancements or new support for servers, SAN switches, and disk subsystems. For the most
current information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658



2.12.2 SAN Volume Controller 7.4.0 new features
The SVC 7.4.0 includes the following new features:
򐂰 Support for child pools
Previously, the disk space of a storage pool came directly from MDisks, so the capacity
of a storage pool depended on the capacity of those MDisks, and a user could not simply
create a storage pool of an arbitrary size. A child pool is a new object that is created
from a physical (parent) storage pool. It provides most of the functions of a storage pool
(for example, volume creation), but the user specifies the capacity of the child pool at
creation time. A brief CLI sketch follows this list.
򐂰 VLAN support for iSCSI and IP partnership
Virtual LAN (VLAN) tagging is a mechanism that is used by system administrators for
network traffic separation at the layer 2 level for Ethernet transport. Although network
traffic separation can be configured at the layer 3 level by using IP subnets, VLAN tagging
allows traffic separation at the layer 2 level.
򐂰 Volume protection
To prevent active volumes or host mappings from being deleted inadvertently, the system
supports a global setting that prevents these objects from being deleted if the system
detects that they have recent I/O activity.
򐂰 4K drive support
Advanced Format is a generic term that pertains to any disk sector format that is used to
store data on magnetic disks in hard disk drives (HDDs) that exceeds 512 - 520 bytes per
sector, such as the 4096-byte (4 KB) sectors of the first-generation Advanced Format
HDDs. Larger sectors use the storage surface area more efficiently for large files but less
efficiently for smaller files, and enable the integration of stronger error correction
algorithms to maintain data integrity at higher storage densities.
򐂰 T10 security
The Data Integrity Extension is an optional feature for direct access devices (peripheral
device type 00). The Data Integrity Extensions are designed to provide end-to-end
protection of user data against media and transmission errors.
򐂰 Support for Global Mirror over extended distances; links with up to 250 ms round-trip
latency between systems
򐂰 Encryption
This solution allows encrypted data to be stored on the SAS drives in the SVC/Storwize
enclosures.
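
The following CLI sketch shows how a child pool might be created from an existing parent pool.
The pool names and the capacity are examples only:

svctask mkmdiskgrp -parentmdiskgrp Pool0 -name Pool0_child1 -size 500 -unit gb

Volumes can then be created in Pool0_child1 in the same way as in any other storage pool.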

2.13 Useful SAN Volume Controller web links


For more information about the SVC-related topics, see the following websites:
򐂰 SVC support:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
򐂰 SVC home page:
http://www.ibm.com/systems/storage/software/virtualization/svc/
򐂰 SVC Interoperability page:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658



򐂰 SVC online documentation:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
򐂰 IBM Redbooks publications about the SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
򐂰 IBM developerWorks® is the premier web-based technical resource and professional
network for IT practitioners:
https://www.ibm.com/developerworks/community/blogs/storagevirtualization/tags/svc?lang=en




Chapter 3. Planning and configuration


In this chapter, we describe the required steps when you plan the installation of an IBM SAN
Volume Controller (SVC) for 2145 models 8A4, 8G4, CF8, and CG8 in your storage network.
For information about 2145 Model DH8, see IBM SAN Volume Controller 2145-DH8
Introduction and Implementation, SG24-8229:
http://www.redbooks.ibm.com/abstracts/sg248229.html?Open

We also review the implications for your storage network and describe performance
considerations.

This chapter includes the following topics:


򐂰 General planning rules
򐂰 Physical planning
򐂰 Logical planning
򐂰 Performance considerations



3.1 General planning rules

Important: At the time of writing, the statements we make are correct, but they might
change. Always verify any statements that are made in this book with the SAN Volume
Controller supported hardware list, device driver, firmware, and recommended software
levels that are available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658

To achieve the most benefit from the SVC, preinstallation planning must include several
important steps. These steps ensure that the SVC provides the best possible performance,
reliability, and ease of management for your application needs. The correct configuration also
helps minimize downtime by avoiding changes to the SVC and the storage area network
(SAN) environment to meet future growth needs.

Note: For more information, see the Pre-sale Technical and Delivery Assessment (TDA)
document that is available at this website:
https://www.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/salib_SA572/lc=
en_ALL_ZZ

A pre-sale TDA needs to be conducted before a final proposal is submitted to a client and
must be conducted before an order is placed to ensure that the configuration is correct and
the solution that is proposed is valid. The preinstall System Assurance Planning Review
(SAPR) Package includes various files that are used in preparation for an SVC preinstall
TDA. A preinstall TDA needs to be conducted shortly after the order is placed and before
the equipment arrives at the client’s location to ensure that the client’s site is ready for the
delivery and responsibilities are documented regarding the client and IBM or the IBM
Business Partner roles in the implementation.

Tip: For more information about the topics that are described, see the following resources:
򐂰 IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
򐂰 SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521,
which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Complete the following tasks when you are planning for the SVC:
򐂰 Collect and document the number of hosts (application servers) to attach to the SVC, the
traffic profile activity (read or write, sequential, or random), and the performance
requirements, which are I/O per second (IOPS).
򐂰 Collect and document the following storage requirements and capacities:
– The total back-end storage that is present in the environment to be provisioned on the
SVC
– The total back-end new storage to be provisioned on the SVC
– The required virtual storage capacity that is used as a fully managed virtual disk
(volume) and used as a Space-Efficient (SE) volume
– The required storage capacity for local mirror copy (volume mirroring)
– The required storage capacity for point-in-time copy (FlashCopy)



– The required storage capacity for remote copy (Metro Mirror and Global Mirror)
– The required storage capacity for compressed volumes
– Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes
򐂰 Define the local and remote SAN fabrics and systems in the cluster if a remote copy or a
secondary site is needed.
򐂰 Define the number of systems in the cluster and the number of pairs of nodes (1 - 4) for
each system. Each pair of nodes (an I/O Group) is the container for the volume. The
number of necessary I/O Groups depends on the overall performance requirements.
򐂰 Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth that is needed between the host and
the SVC, the SVC and the disk subsystem, between the SVC nodes, and for the
inter-switch link (ISL) between the local and remote fabric.

Note: Check and carefully count the required ports for extended links. Especially in a
stretched cluster environment, you might need many of the higher-cost longwave
gigabit interface converters (GBICs).

򐂰 Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and bandwidth that is needed between
the host and the SVC.
򐂰 Determine the SVC service IP address.
򐂰 Determine the IP addresses for the SVC system and for the host that connects through
iSCSI.
򐂰 Determine the IP addresses for IP replication.
򐂰 Define a naming convention for the SVC nodes, host, and storage subsystem.
򐂰 Define the managed disks (MDisks) in the disk subsystem.
򐂰 Define the storage pools. The storage pools depend on the disk subsystem that is in place
and the data migration requirements.
򐂰 Plan the logical configuration of the volume within the I/O Groups and the storage pools to
optimize the I/O load between the hosts and the SVC.
򐂰 Plan for the physical location of the equipment in the rack.

The SVC planning can be categorized into the following types:


򐂰 Physical planning
򐂰 Logical planning

We describe these planning types in the following sections.

3.2 Physical planning


You must consider several key factors when you are planning the physical site of an SVC
installation. The physical site must have the following characteristics:
򐂰 Power, cooling, and location requirements are present for the SVC and the uninterruptible
power supply (UPS) units. (UPS is only valid for 2145 models 8A4, 8G4, CF8, and CG8)
򐂰 The SVC nodes and their uninterruptible power supply units must be in the same rack.



򐂰 You must plan for two separate power sources if you have a redundant ac-power switch,
which is available as an optional feature.
򐂰 An SVC node (valid for 2145 model 8A4, 8G4, CF8, and CG8) is one Electronic Industries
Association (EIA) unit high. Model 2145-DH8 is two units high.
򐂰 Other hardware devices can be in the same SVC rack, such as IBM Storwize V7000, IBM
Storwize V3700, SAN switches, an Ethernet switch, and other devices.
򐂰 You must consider the maximum power rating of the rack; do not exceed it. For more
information about the power requirements, see this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU_7.3.0/com.ibm.storage.svc.
console.730.doc/svc_ichome_730.html

3.2.1 Preparing your uninterruptible power supply unit environment


Ensure that your physical site meets the installation requirements for the uninterruptible
power supply unit.

2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high. It is included
with, and can operate on, the following node types only:
򐂰 SVC 2145-8A4
򐂰 SVC 2145-8G4
򐂰 SVC 2145-CF8
򐂰 SVC 2145-CG8

When the 2145 UPS-1U is configured, the voltage that is supplied to it must be 200 - 240 V,
single phase.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.

2145 DH8
This model includes two integrated AC power supplies and battery units, replacing the
uninterruptible power supply feature that was required on the previous generation storage
engine models.

The functionality of uninterruptible power supply units is provided by internal batteries,
which are delivered with each node's hardware. If external power is removed, the batteries
provide sufficient internal power to keep the node operational long enough to dump the content
of the non-volatile part of the memory to disk.

After this dump completes, the SVC node shuts down.

For more information, see IBM SAN Volume Controller 2145-DH8 Introduction and
Implementation, SG24-8229, which is available at this website:
http://www.redbooks.ibm.com/abstracts/SG248229.html?Open



3.2.2 Physical rules
The SVC must be installed in pairs to provide high availability, and each node in the clustered
system must be connected to a separate uninterruptible power supply unit.

Be aware of the following considerations:


򐂰 Each SVC node of an I/O Group must be connected to a separate uninterruptible power
supply unit.
򐂰 Each uninterruptible power supply unit pair that supports a pair of nodes must be
connected to a separate power domain (if possible) to reduce the chances of input power
loss.
򐂰 For safety reasons, the uninterruptible power supply units must be installed in the lowest
positions in the rack. If necessary, move lighter units toward the top of the rack to make
way for the uninterruptible power supply units.
򐂰 The power and serial connection from a node must be connected to the same
uninterruptible power supply unit; otherwise, the node cannot start.
򐂰 SVC hardware models 2145-8A4, 2145-8G4, and 2145-CF8 must be connected to a 5115
uninterruptible power supply unit. They cannot start with a 5125 uninterruptible power
supply unit. The 2145-CG8 uses the 8115 uninterruptible power supply unit.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

Figure 3-1 on page 78 shows a power cabling example for the 2145-CG8.



Figure 3-1 2145-CG8 power cabling

You must follow the guidelines for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model brings internal changes. One example is the
worldwide port name (WWPN) to physical port mapping. The 2145-8A4, 2145-8G4,
2145-CF8, and 2145-CG8 use the same mapping.

Figure 3-2 on page 79 shows the WWPN mapping.



Figure 3-2 WWPN mapping

Figure 3-3 on page 80 shows a sample layout in which nodes within each I/O Group are split
between separate racks. This layout protects against power failures and other events that
affect only a single rack.



Figure 3-3 Sample rack layout

3.2.3 Cable connections


Create a cable connection table or documentation that follows your environment’s
documentation procedure to track all of the following connections that are required for the
setup:
򐂰 Nodes
򐂰 Uninterruptible power supply unit
򐂰 Ethernet
򐂰 iSCSI or Fibre Channel over Ethernet (FCoE) connections
򐂰 FC ports

3.3 Logical planning


For logical planning, we describe the following topics:
򐂰 Management IP addressing plan
򐂰 SAN zoning and SAN connections
򐂰 iSCSI IP addressing plan
򐂰 IP Mirroring
򐂰 Back-end storage subsystem configuration
򐂰 SAN Volume Controller clustered system configuration



򐂰 Stretched cluster system configuration
򐂰 Storage pool configuration
򐂰 Volume configuration
򐂰 Host mapping (LUN masking)
򐂰 Advanced Copy Services
򐂰 SAN boot support
򐂰 Data migration from a non-virtualized storage subsystem
򐂰 SVC configuration backup procedure

3.3.1 Management IP addressing plan


For management, remember the following rules:
򐂰 In addition to an FC connection, each node has an Ethernet connection for configuration
and error reporting.
򐂰 Each SVC clustered system needs at least one IP address for management and one IP
address per node to be used for service, with the new Service Assistant feature that was
available starting with SVC 6.1.
The service IP address is usable only from the non-configuration node or when the SVC
system is in service mode. Service mode is a disruptive operation. Both IP addresses
must be in the same IP subnet, as shown in Example 3-1.

Example 3-1 Management IP address sample


management IP add. 10.11.12.120
service IP add. 10.11.12.121

򐂰 Each node in an SVC clustered system must have at least one Ethernet connection.

Starting with SVC 6.1, the system management is performed through an embedded GUI that
is running on the nodes. A separate console, such as the traditional SVC Hardware
Management Console (HMC) or IBM System Storage Productivity Center (SSPC), is no
longer required to access the management interface. To access the management GUI, you
direct a web browser to the system management
IP address.

The clustered system first must be created that specifies an IPv4 or an IPv6 system address
for port 1. After the clustered system is created, more IP addresses can be created on port 1
and port 2 until both ports have an IPv4 and an IPv6 address that are defined. This design
allows the system to be managed on separate networks, therefore providing redundancy if a
network fails.

Figure 3-4 on page 82 shows the IP configuration possibilities.



Figure 3-4 IP configuration possibilities

Support for iSCSI provides one other IPv4 and one other IPv6 address for each Ethernet port
on every node. These IP addresses are independent of the clustered system configuration
IP addresses.

The SVC Model 2145-CG8 optionally can have a serial-attached SCSI (SAS) adapter with
external ports disabled or a high-speed 10 Gbps Ethernet adapter with two ports. Two more
IPv4 or IPv6 addresses are required in both cases.

When you are accessing the SVC through the GUI or Secure Shell (SSH), choose one of the
available IP addresses to which to connect. No automatic failover capability is available. If one
network is down, use an IP address on the alternative network. Clients might use intelligence
in domain name servers (DNS) to provide partial failover.

3.3.2 SAN zoning and SAN connections


SAN storage systems that use the SVC can be configured with a minimum of two (and up to
eight) SVC nodes, which are arranged in an SVC clustered system. These SVC nodes are
attached to the SAN fabric, with disk subsystems and host systems.



The SAN fabric is zoned to allow the SVC nodes to "see" each other and the disk
subsystems, and to allow the hosts to "see" the SVC nodes.

The hosts cannot directly see or operate LUNs on the disk subsystems that are assigned to
the SVC system. The SVC nodes within an SVC system must see each other and all of the
storage that is assigned to the SVC system.

The zoning capabilities of the SAN switch are used to create three distinct zones. The SVC
7.4 supports 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps FC fabric, depending on the hardware
platform and on the switch where the SVC is connected. In an environment where you have a
fabric with multiple-speed switches, the preferred practice is to connect the SVC and the disk
subsystem to the switch operating at the highest speed.

All SVC nodes in the SVC clustered system are connected to the same SANs, and they
present volumes to the hosts. These volumes are created from storage pools that are
composed of MDisks that are presented by the disk subsystems. The fabric must have the
following distinct zones:
򐂰 SVC clustered system zone
Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC
internode communication.
򐂰 Host zones
Create an SVC host zone for each server that is accessing storage from the SVC system.
򐂰 Storage zone
Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.

Port designation recommendations


Port-to-local-node communication is used for mirroring write cache and for metadata
exchange between nodes, and it is critical to the stable operation of the cluster. The DH8
nodes, with their 8-port and 12-port configurations, provide an opportunity to isolate this
node-to-node traffic from other cluster traffic on dedicated ports, which provides a level of
protection against misbehaving devices and workloads that can compromise the performance
of the shared ports.

Additionally, isolating remote replication traffic on dedicated ports is beneficial. This isolation
ensures that problems that affect the cluster-to-cluster interconnection do not adversely affect
ports on the primary cluster and therefore affect the performance of workloads running on the
primary cluster.
IBM recommends the following port designations for isolating both port to local and port to
remote node traffic, as shown in Table 3-1 on page 84.

Important: Be careful when you perform the zoning so that inter-node ports are not used
for Host/Storage traffic in the 8-port and 12-port configurations.



Table 3-1 Port designation recommendations for isolating traffic

Card/port  SAN fabric  Four-port nodes          Eight-port nodes             Twelve-port nodes            Twelve-port nodes (write data rate greater than 3 GBps per I/O Group)
C1P1       A           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P2       B           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P3       A           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C1P4       B           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C2P1       A           -                        Inter-node                   Inter-node                   Inter-node
C2P2       B           -                        Inter-node                   Inter-node                   Inter-node
C2P3       A           -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C2P4       B           -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C5P1       A           -                        -                            Host/Storage                 Host/Storage
C5P2       B           -                        -                            Host/Storage                 Host/Storage
C5P3       A           -                        -                            Replication or Host/Storage  Replication or Host/Storage
C5P4       B           -                        -                            Replication or Host/Storage  Replication or Host/Storage
localfcportmask        1111                     110000                       110000                       110011

This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, or even later migration from 8-port or 12-port
configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered. However, these approaches do not appreciably increase availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.

Consider that although it is true that alternate port mappings that spread traffic across host
bus adapters (HBAs) can allow adapters to come back online following a failure, they will not
prevent a node from going offline temporarily to reboot and attempt to isolate the failed
adapter and then rejoin the cluster. Our recommendation considers these issues with a view
that the greater complexity might lead to migration challenges in the future, and therefore, the
simpler approach is best.
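
The port masks that are listed in Table 3-1 are applied with the chsystem command. The
following sketch sets the local FC port mask for a 12-port node configuration with a high
write data rate (the value is taken from the table; adjust it to your configuration):

svctask chsystem -localfcportmask 110011

The mask is read from right to left, and each bit that is set to 1 permits the corresponding
FC port to be used for local node-to-node communication.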

Zoning considerations for Metro Mirror and Global Mirror


Ensure that you are familiar with the constraints on zoning a switch to support Metro Mirror
and Global Mirror partnerships. SAN configurations that use intracluster Metro Mirror and
Global Mirror relationships do not require more switch zones.



The SAN configurations that use intercluster Metro Mirror and Global Mirror relationships
require the following other switch zoning considerations:
򐂰 For each node in a clustered system, zone exactly two FC ports to exactly two FC ports
from each node in the partner clustered system.
򐂰 If dual-redundant ISLs are available, evenly split the two ports from each node between
the two ISLs. That is, exactly one port from each node must be zoned across each ISL.
򐂰 Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.

Important: Failure to follow these configuration rules exposes the clustered system to
an unwanted condition that can result in the loss of host access to volumes.

If an intercluster link becomes severely and abruptly overloaded, the local FC fabric can
become congested so that no FC ports on the local SVC nodes can perform local
intracluster heartbeat communication. This situation can, in turn, result in the nodes
experiencing lease expiry events. In a lease expiry event, a node reboots to attempt to
reestablish communication with the other nodes in the clustered system. If the leases
for all nodes expire simultaneously, a loss of host access to volumes can occur during
the reboot events.

򐂰 Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
򐂰 Configure zoning to allow all of the nodes in the local fabric to communicate with all of the
nodes in the remote fabric.

򐂰 Optional: Modify the zoning so that the hosts that are visible to the local clustered system
can recognize the remote clustered system. This capability allows a host to have access to
data in the local and remote clustered systems.
򐂰 Verify that clustered system A cannot recognize any of the back-end storage that is owned
by clustered system B. A clustered system cannot access logical units (LUs) that a host or
another clustered system can also access.

Figure 3-5 on page 86 shows the SVC zoning topology.



Figure 3-5 SVC zoning topology

Figure 3-6 on page 87 shows an example of the SVC, host, and storage subsystem
connections.



Figure 3-6 Example of SVC, host, and storage subsystem connections

You must also observe the following guidelines:


򐂰 LUNs (MDisks) must have exclusive access to a single SVC clustered system and cannot
be shared between other SVC clustered systems or hosts.
򐂰 A storage controller can present LUNs to the SVC (as MDisks) and to other hosts in the
SAN. However, in this case, it is better to avoid sharing storage controller ports between
the SVC and the hosts.
򐂰 Mixed port speeds are not permitted for intracluster communication. All node ports within
a clustered system must be running at the same speed.
򐂰 ISLs are not to be used for intracluster node communication or node-to-storage controller
access.
򐂰 The switch configuration in an SVC fabric must comply with the switch manufacturer’s
configuration rules, which can impose restrictions on the switch configuration. For
example, a switch manufacturer might limit the number of supported switches in a SAN.
Operation outside of the switch manufacturer’s rules is not supported.
򐂰 Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host must be
in separate zones. For example, IBM AIX® and Microsoft hosts must be in separate
zones. In this case, “dissimilar” means that the hosts are running separate operating
systems or are using separate hardware platforms. Therefore, various levels of the same
operating system are regarded as similar. This requirement is a SAN interoperability issue,
rather than an SVC requirement.
򐂰 Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as
you need, depending on the high availability and performance that you want from your
configuration.



Important: Be aware of the following considerations:
򐂰 The use of ISLs for intracluster node communication can negatively affect the
availability of the system because of the high dependency on the quality of these
links to maintain heartbeat and other system management services. Therefore, we
strongly advise that you use them only as part of an interim configuration to facilitate
SAN migrations, and not as part of the designed solution.
򐂰 The use of ISLs for SVC node to storage controller access can lead to port
congestion, which can negatively affect the performance and resiliency of the SAN.
Therefore, we strongly advise that you use them only as part of an interim
configuration to facilitate SAN migrations, and not as part of the designed solution.
Since SVC 6.3, you can use ISLs between nodes, but they must be in a dedicated
SAN, virtual SAN (Cisco technology), or logical SAN (Brocade technology).
򐂰 The use of mixed port speeds for intercluster communication can lead to port
congestion, which can negatively affect the performance and resiliency of the SAN;
therefore, it is not supported.

You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
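
For example, the following command lists the logins between the SVC node ports and the other
ports in the fabric, with a colon as the field delimiter:

svcinfo lsfabric -delim :

The command also provides optional filtering parameters; see the CLI reference for the
available filters.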

Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.

(The figure shows two fabrics, with fabric domain IDs 11 and 12, and one clustered system
zone per fabric that contains all of the SVC node ports on that fabric. SVC-Cluster Zone 1,
fabric domain ID,port: 11,0 - 11,1 - 11,2 - 11,3. SVC-Cluster Zone 2: 12,0 - 12,1 - 12,2 - 12,3.)
Figure 3-7 SVC clustered system zoning example



Figure 3-8 shows a storage subsystem zoning example.

(The figure shows one storage zone per fabric that contains all of the storage ports and all
of the SVC ports on that fabric. SVC-Storwize V7000 Zone 1, fabric domain ID,port:
11,0 - 11,1 - 11,2 - 11,3 - 11,8. SVC-Storwize V7000 Zone 2: 12,0 - 12,1 - 12,2 - 12,3 - 12,8.
SVC-EMC Zone 1: 11,0 - 11,1 - 11,2 - 11,3 - 11,9. SVC-EMC Zone 2: 12,0 - 12,1 - 12,2 - 12,3 - 12,9.)
Figure 3-8 Storage subsystem zoning example

Figure 3-9 shows a host zoning example.

(The figure shows host zones that contain one IBM Power System port and one port per SVC
node. SVC-Power System Zone P1, fabric domain ID,port: 21,1 - 11,0 - 11,1.
SVC-Power System Zone P2: 22,1 - 12,2 - 12,3.)
Figure 3-9 Host zoning example



3.3.3 iSCSI IP addressing plan
Since version 6.3, the SVC supports host access through iSCSI (as an alternative to FC). The
following considerations apply:
򐂰 The SVC uses the built-in Ethernet ports for iSCSI traffic. If the optional 10 Gbps Ethernet
feature is installed, you can connect host systems through the two 10 Gbps Ethernet ports
per node.
򐂰 All node types, which can run SVC 6.1 or later, can use the iSCSI feature.
򐂰 The SVC supports the Challenge Handshake Authentication Protocol (CHAP)
authentication methods for iSCSI.
򐂰 iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This
design reduces the need for multipathing support in the iSCSI host.
򐂰 iSCSI IP addresses can be configured for one or more nodes.
򐂰 iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
򐂰 The iSCSI qualified name (IQN) for an SVC node is
iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the
clustered system name and the node name, it is important not to change these names
after iSCSI is deployed.
򐂰 Each node can be given an iSCSI alias, as an alternative to the IQN.
򐂰 The IQN of the host is added to an SVC host object in the same way that you add FC
worldwide port names (WWPNs); see the sketch after this list.
򐂰 Host objects can have WWPNs and IQNs.
򐂰 Standard iSCSI host connection procedures can be used to discover and configure the
SVC as an iSCSI target.
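
The following sketch shows the typical CLI flow: assign an iSCSI IP address to a node Ethernet
port, and then create a host object from the initiator IQN. The node ID, IP addresses, host
name, and IQN are examples only; in the cfgportip command, the trailing 1 is the Ethernet
port ID:

svctask cfgportip -node 1 -ip 10.11.13.10 -mask 255.255.255.0 -gw 10.11.13.1 1
svctask mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:linuxhost01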

Next, we describe several ways in which you can configure the SVC 6.1 or later.

Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-10 Use of IPv4 addresses



You can set up the equivalent configuration with only IPv6 addresses.

Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.

Figure 3-11 IPv4 address plan with two subnets

Figure 3-12 shows the use of redundant networks.

Figure 3-12 Redundant networks

Figure 3-13 on page 92 shows the use of a redundant network and a third subnet for
management.



Figure 3-13 Redundant network with third subnet for management

Figure 3-14 shows the use of a redundant network for iSCSI data and management.

Figure 3-14 Redundant network for iSCSI and management

Be aware of the following considerations:


򐂰 All of these examples are valid for IPv4 and IPv6 addresses.
򐂰 Using IPv4 addresses on one port and IPv6 addresses on the other port is valid.
򐂰 Having separate subnet configurations for IPv4 and IPv6 addresses is valid.



Storwize 7.4 support for VLAN for iSCSI and IP replication
When a VLAN ID is configured for the IP addresses that are used for either iSCSI host
attachment or IP replication on the SVC, the appropriate VLAN settings on the Ethernet network
and on the servers must also be configured correctly to avoid connectivity issues. After the
VLANs are configured, any later changes to the VLAN settings disrupt iSCSI and IP replication
traffic to and from the SVC. A configuration sketch follows the note below.

Important: During the individual VLAN configuration for each IP address, if the VLAN
settings for the local and failover ports on two nodes of an I/O Group differ, the switches
must be configured so that failover VLANs are configured on the local switch ports, too.
Then, the failover of IP addresses from the failing node to the surviving node succeeds. If
this configuration is not done, paths are lost to the SVC storage during a node failure.
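
As a sketch, a VLAN ID can be assigned when an iSCSI IP address is configured on a port. The
VLAN ID, addresses, node ID, and port ID are examples only, and the -vlan parameter requires
software level 7.4 or later:

svctask cfgportip -node 1 -ip 10.11.13.10 -mask 255.255.255.0 -gw 10.11.13.1 -vlan 101 1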

3.3.4 IP Mirroring
One of the most important new functions of version 7.2 in the Storwize family is IP replication,
which enables the use of lower-cost Ethernet connections for remote mirroring. The capability
is available as a chargeable option (Metro or Global Mirror) on all Storwize family systems.
The new function is transparent to servers and applications in the same way that traditional
FC-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror,
and Global Mirror with changed volumes) are supported.

The configuration of the system is straightforward. Storwize family systems normally can find
each other in the network and can be selected from the GUI. IP replication includes
Bridgeworks SANSlide network optimization technology and is available at no additional
charge. Remote mirror is a chargeable option but the price does not change with IP
replication. Existing remote mirror users have access to the new function at no additional
charge.

IP connections that are used for replication can have a long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many hops between
switches and other appliances in the network. Traditional replication solutions transmit data,
wait for a response, and then transmit more data, which can result in network usage as low as
20% (based on IBM measurements). This situation gets worse as the latency gets longer.

Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no
separate appliances; therefore, no other costs and configuration are necessary. It uses AI
technology to transmit multiple data streams in parallel and adjusts automatically to changing
network environments and workloads.

Because SANSlide does not use compression, it is independent of application or data type.
Most importantly, SANSlide improves network bandwidth usage up to 3x, so clients might be
able to deploy a less costly network infrastructure or use faster data transfer to speed
replication cycles, improve remote data currency, and recover more quickly.

Note: The limiting factor of the distance is the round-trip time. The maximum supported
round-trip time between sites is 80 milliseconds (ms) for a 1 Gbps link. For a 10 Gbps link,
the maximum supported round-trip time between sites is 10 ms.

Key features of IP mirroring


IBM offers the new, enhanced function of IP-based Remote Copy services, which are
primarily targeted to small and midrange environments, where clients typically cannot afford
the cost of FC-based replication between sites.



IP-based replication offers the following new features:
򐂰 Remote Copy modes support Metro Mirror, Global Mirror, and Global Mirror with Change
Volumes.
򐂰 All platforms that support Remote Copy are supported.
򐂰 The configuration uses automatic path configuration through the discovery of a remote
cluster. You can configure any Ethernet port (10g/1G) for replication that uses Remote
Copy port groups.
򐂰 Dedicated Ethernet ports for replication.
򐂰 CHAP-based authentication is supported.
򐂰 Licensing is the same as the existing Remote Copy.
򐂰 High availability features auto-failover support across redundant links.
򐂰 Performance is based on an IP connectivity solution from a vendor with experience in
offering low-bandwidth, high-latency, long-distance IP links. Support is for an 80 ms
round-trip time on a 1 Gbps link.

Figure 3-15 shows the schematic way to connect two sides through IP mirroring.

Figure 3-15 IP mirroring

Figure 3-16 on page 95 and Figure 3-17 on page 95 show configuration possibilities for
connecting two sites through IP mirroring. Figure 3-16 on page 95 shows the configuration
with single links.



Figure 3-16 Single link configuration

The administrator must configure at least one port on each site to use with the link.
Configuring more than one port means that replication continues, even if a node fails.
Figure 3-17 shows a redundant IP configuration with two links.

Figure 3-17 Two links with active and failover ports

As shown in Figure 3-17, the following replication group setup for dual redundant links is
used:
򐂰 Replication Group 1: Four IP addresses, each on a different node (green)
򐂰 Replication Group 2: Four IP addresses, each on a different node (orange)

For IP replication, each Ethernet port has the following possible configuration and status
values:

򐂰 Possible user configuration of each Ethernet port:
– Not used for IP replication (default)
– Used for IP replication, link 1
– Used for IP replication, link 2



򐂰 IP replication status for each Ethernet port:
– Not used for IP replication
– Active (solid box)
– Standby (outline box)

Figure 3-18 shows the configuration of an IP partnership. You can obtain the requirements to
set up an IP partnership at this weblink:
https://ibm.biz/BdEpPB

Figure 3-18 IP partnership configuration
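
The following minimal CLI sketch shows how such a partnership might be created from the local
system. The IP address, bandwidth, and background copy rate are hypothetical values, the
parameter names are our reading of the 7.4 CLI, and the lines that start with # are annotations
only; verify the exact mkippartnership syntax in the command-line reference. The equivalent
partnership definition must also be created on the remote system before the partnership becomes
fully configured.

# Create an IPv4 partnership with the remote system (hypothetical address and values)
mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50
# Verify the partnership state; repeat the equivalent command on the remote system
lspartnership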

Terminology for IP replication


This section lists the following terminology for IP replication:
򐂰 Discovery
This term refers to the process by which two SVC clusters exchange information about
their IP address configuration. For IP-based partnerships, only IP addresses that are
configured for Remote Copy are discovered. For example, the first discovery occurs when
the user runs the mkippartnership CLI command. Subsequent discoveries might
occur as a result of user activities (configuration changes) or as a result of hardware
failures (for example, node failure and port failure).



򐂰 Remote Copy port group
This term indicates the set of local and remote Ethernet ports (on the local and partnered
SVC systems) that can access each other through a long-distance IP link. For a
successful partnership to be established between two SVC clusters, at least two ports
must be in the same Remote Copy port group, one from the local cluster and one from the
partner cluster. More than two ports from the same system can exist in a group to allow for
TCP connection failover if a local or partner node or port fails.
򐂰 Remote Copy port group ID
This numeric value indicates to which group the port belongs. Zero is used to indicate that
a port is not used for Remote Copy. For two SVC clusters to form a partnership, both
clusters must have at least one port that is configured with the same group ID and they
must be accessible to each other.
򐂰 RC login
An RC login is a bidirectional full-duplex data path between two SVC clusters that are Remote
Copy partners. This path is between an IP address pair, one local and one remote. An RC
login carries Remote Copy traffic that consists of host writes, background copy traffic
during initial synchronization within a relationship, periodic updates in Global Mirror with
changed volumes relationships, and so on.
򐂰 Path configuration
Path configuration is the act of setting up RC logins between two partnered SVC systems.
The selection of IP addresses to be used for RC logins is based on certain rules that are
specified in the Preferred practices section. Most of those rules are driven by constraints
and requirements from a vendor-supplied link management library. A simple algorithm is
run by each SVC system to arrive at the list of RC logins that must be established. Local
and remote SVC clusters are expected to arrive at the same IP address pairs for RC login
creation, even though they run the algorithm independently.
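
As an illustration of how a node Ethernet port might be placed into a Remote Copy port group,
the following hedged sketch assigns an IP address to Ethernet port 1 of a node and associates it
with port group 1. The node name and addresses are hypothetical, and the -remotecopy parameter
name is our assumption; confirm the cfgportip description in the command-line reference before
you use it.

# Assign an IPv4 address to Ethernet port 1 on node1 and place it in Remote Copy port group 1
cfgportip -node node1 -ip 192.168.10.11 -mask 255.255.255.0 -gw 192.168.10.1 -remotecopy 1 1
# List the Ethernet port configuration to verify the port group assignment
lsportip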

Preferred practices
The following preferred practices are suggested for IP replication:
򐂰 Configure two physical links between sites for redundancy.
򐂰 Configure Ethernet ports that are dedicated for Remote Copy. Do not allow iSCSI host
attach for these Ethernet ports.
򐂰 Configure remote copy port group IDs on both nodes for each physical link to survive node
failover.
򐂰 A minimum of four nodes are required for dual redundant links to work across node
failures. If a node failure occurs on a two-node system, one link is lost.
򐂰 Do not zone the two SVC systems together over FC or FCoE when an IP partnership exists.
򐂰 Configure CHAP secret-based authentication, if required.
򐂰 The maximum supported round-trip time between sites is 80 ms for a 1 Gbps link.
򐂰 The maximum supported round-trip time between sites is 10 ms for a 10 Gbps link.
򐂰 For IP partnerships, the recommended method of copying is Global Mirror with changed
volumes because of the performance benefits. Also, Global Mirror and Metro Mirror might
be more susceptible to the loss of synchronization.
򐂰 The amount of inter-cluster heartbeat traffic is 1 Mbps per link.
򐂰 The minimum bandwidth requirement for the inter-cluster link is 10 Mbps. However, this
bandwidth scales up with the amount of host I/O that you choose to use.



For more information, see IBM SAN Volume Controller and Storwize Family Native IP
Replication, REDP-5103:
http://www.redbooks.ibm.com/abstracts/redp5103.html?Open

3.3.5 Back-end storage subsystem configuration


Back-end storage subsystem configuration planning must be applied to all storage controllers
that are attached to the SVC.

For more information about supported storage subsystems, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658

Apply the following general guidelines for back-end storage subsystem configuration
planning:
򐂰 In the SAN, storage controllers that are used by the SVC clustered system must be
connected through SAN switches. Direct connection between the SVC and the storage
controller is not supported.
򐂰 Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred
practice. Therefore, canister A in the V3700 subsystem can be connected to SAN A only,
or to SAN A and SAN B. Also, canister B in the V3700 subsystem can be connected to
SAN B only, or to SAN B and SAN A.
򐂰 Split-controller configurations are supported by certain rules and configuration guidelines.
For more information, see 3.3.7, “Stretched cluster system configuration” on page 101.
򐂰 All SVC nodes in an SVC clustered system must see the same set of ports from each
storage subsystem controller. Violating this guideline causes the paths to become
degraded. This degradation can occur as a result of applying inappropriate zoning and
LUN masking. This guideline has important implications for a disk subsystem, such as
DS3000, V3700, V5000, or V7000, which imposes exclusivity rules regarding to which
HBA WWPNs a storage partition can be mapped.

MDisks within storage pools: SVC 6.1 and later provide for better load distribution
across paths within storage pools.

In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it is possible and even likely that certain paths are more heavily loaded than
others.

This condition is more likely to occur with a smaller number of MDisks in the storage pool.
Starting with SVC 6.1, the code contains logic that considers MDisks within storage pools.
Therefore, the code more effectively distributes their active paths that are based on the
storage controller ports that are available.

The Detect MDisks function (the detectmdisk CLI command) must be run following the creation
or modification (addition or removal of MDisks) of storage pools for paths to be redistributed.
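
A minimal sketch of that step follows. The detectmdisk command rescans the Fibre Channel
network and rebalances MDisk access across the available controller ports after a storage pool
change, and lsmdisk can be used to confirm that the MDisks are online.

# Rescan the Fibre Channel network and rebalance MDisk paths after a storage pool change
detectmdisk
# Verify that the MDisks are online
lsmdisk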



If you do not have a storage subsystem that supports the SVC round-robin algorithm, ensure
that the number of MDisks per storage pool is a multiple of the number of storage ports that
are available. This approach ensures sufficient bandwidth to the storage controller and an
even balance across storage controller ports.

In general, configure disk subsystems as though no SVC exists. However, we suggest the
following specific guidelines:
򐂰 Disk drives:
– Exercise caution with large disk drives so that you do not have too few spindles to
handle the load.
– RAID 5 is suggested for most workloads.
򐂰 Array sizes:
– An array size of 8+P or 4+P is suggested for the IBM DS4000® and DS5000™
families, if possible.
– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array and rank size. If the array size is
greater than 2 TB and the disk subsystem does not support MDisks that are larger than
2 TB, create the minimum number of LUNs of equal size.
– An array size of 7+P is suggested for the V3700, V5000, and V7000 Storwize families.
– When you are adding more disks to a subsystem, consider adding the new MDisks to
existing storage pools versus creating more small storage pools.
– Auto balancing was introduced in version 7.3 to restripe volume extents evenly across
all MDisks in the storage pools.
– A maximum of 1,024 worldwide node names (WWNNs) are available per cluster:
• EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port.
Each WWNN appears as a separate controller to the SVC.
• IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN
appears as a single controller with multiple ports/worldwide port names (WWPNs),
for a maximum of 16 ports/WWPNs per WWNN.
򐂰 DS8000 that uses four or eight of the four-port HA cards:
– Use ports 1 and 3 or ports 2 and 4 on each card. (It does not matter for 8 Gb cards.)
This setup provides eight or 16 ports for the SVC use.
– Use eight ports minimum, up to 40 ranks.
– Use 16 ports for 40 or more ranks. Sixteen is the maximum number of ports.
򐂰 DS4000/DS5000 (EMC CLARiiON/CX):
– Both systems have the preferred controller architecture, and the SVC supports this
configuration.
– Use a minimum of four ports, and preferably eight or more ports, up to a maximum of
16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.
– Support is available for mapping controller A ports to Fabric A and controller B ports to
Fabric B or cross-connecting ports to both fabrics from both controllers. The
cross-connecting approach is preferred to avoid Automatic Volume Transfer
(AVT)/Trespass from occurring if a fabric fails or all paths to a fabric fail.



򐂰 DS3400 subsystems: Use a minimum of four ports.
򐂰 Storwize family: Use a minimum of four ports, and preferably eight ports.
򐂰 IBM XIV requirements and restrictions:
– The use of XIV extended functions, including snaps, thin provisioning, synchronous
replication (native copy services), and LUN expansion of LUNs that are presented to
the SVC, is not supported.
– A maximum of 511 LUNs from one XIV system can be mapped to an SVC clustered
system.
򐂰 Full 15 module XIV recommendations (161 TB usable):
– Use two interface host ports from each of the six interface modules.
– Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC
node ports.
– Create 48 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire full frame XIV with the SVC.
– Map LUNs to the SVC as 48 MDisks and add all of them to the single XIV storage pool
so that the SVC drives the I/O to four MDisks and LUNs for each of the 12 XIV FC
ports. This design provides a good queue depth on the SVC to drive XIV adequately.
򐂰 Six module XIV recommendations (55 TB usable):
– Use two interface host ports from each of the two active interface modules.
– Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.)
Also, zone these four ports with all SVC node ports.
– Create 16 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire XIV with the SVC.
– Map the LUNs to the SVC as 16 MDisks and add all of them to the single XIV storage
pool so that the SVC drives I/O to four MDisks and LUNs per each of the four XIV FC
ports. This design provides a good queue depth on the SVC to drive the XIV
adequately.
򐂰 Nine module XIV recommendations (87 TB usable):
– Use two interface host ports from each of the four active interface modules.
– Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are
inactive.) Also, zone these eight ports with all of the SVC node ports.
– Create 26 LUNs of equal size, each of which is a multiple of 17 GB. This design
creates approximately 1632 GB if you are using the entire XIV with the SVC.
– Map the LUNs to the SVC as 26 MDisks and map all of them to the single XIV storage
pool so that the SVC drives I/O to three MDisks and LUNs on each of the six ports and
four MDisks and LUNs on the other two XIV FC ports. This design provides a useful
queue depth on the SVC to drive the XIV adequately.
򐂰 Configure XIV host connectivity for the SVC clustered system:
– Create one host definition on the XIV, and include all SVC node WWPNs.
– You can create clustered system host definitions (one per I/O Group), but the
preceding method is easier to configure.
– Map all LUNs to all SVC node WWPNs.

3.3.6 SAN Volume Controller clustered system configuration
To ensure high availability in SVC installations, consider the following guidelines when you
design a SAN with the SVC:
򐂰 All nodes in a clustered system must be in the same LAN segment because the nodes in
the clustered system must assume the same clustered system or service IP address.
Ensure that the network configuration allows any of the nodes to use these IP addresses.
If you plan to use the second Ethernet port on each node, two LAN segments can be
used. However, port 1 of every node must be in one LAN segment, and port 2 of every
node must be in the other LAN segment.
򐂰 To maintain application uptime in the unlikely event of an individual SVC node failing, SVC
nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but it is still a valid
configuration. The remaining node operates in write-through mode, which means that the
data is written directly to the disk subsystem. (The cache is disabled for the write.)
򐂰 The uninterruptible power supply unit must be in the same rack as the node to which it
provides power, and each uninterruptible power supply unit can have only one connected
node.
򐂰 The FC SAN connections between the SVC node and the switches are optical fiber. These
connections can run at 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps (DH8), depending on your
SVC and switch hardware. The 2145-CG8, 2145-CF8, 2145-8A4, 2145-8G4, and
2145-DH8 SVC nodes auto-negotiate the connection speed with the switch.
򐂰 The SVC node ports must be connected to the FC fabric only. Direct connections between
the SVC and the host, or the disk subsystem, are unsupported.
򐂰 Two SVC clustered systems cannot have access to the same LUNs within a disk
subsystem. Configuring zoning so that two SVC clustered systems have access to the
same LUNs (MDisks) will likely result in data corruption.
򐂰 The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be in separate racks and separate rooms. For more information, see 3.3.7, “Stretched
cluster system configuration” on page 101.
򐂰 The SVC uses three MDisks as quorum disks for the clustered system. A preferred
practice for redundancy is to have each quorum disk in a separate storage subsystem,
where possible. The current locations of the quorum disks can be displayed by using the
lsquorum command and relocated by using the chquorum command.
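
For example, the following minimal sketch checks the quorum assignments and moves one quorum
index to another MDisk. The MDisk ID and quorum index are hypothetical, and the exact chquorum
syntax should be verified in the command-line reference.

# Display the current quorum disk assignments, including the active quorum disk
lsquorum
# Move quorum index 2 to MDisk 5, which is on a different storage subsystem (hypothetical IDs)
chquorum -mdisk 5 2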

3.3.7 Stretched cluster system configuration


You can implement a stretched-cluster system configuration, which is also referred to as a
Split I/O Group, as a high-availability option.
The SVC 7.4 supports the following stretched-cluster system configurations:
򐂰 No ISL configuration:
– Passive wave division multiplexing (WDM) devices can be used between both sites.
– No ISLs can be located between the SVC nodes (which is similar to SVC
5.1-supported configurations).
– The supported distance is up to 40 km (24.8 miles).
Figure 3-19 on page 102 shows an example of a stretched-cluster configuration with no
ISL configuration.



Figure 3-19 Stretched cluster with no ISL configuration

򐂰 ISL configuration:
– ISLs are located between the SVC nodes.
– Maximum distance is similar to Metro Mirror distances.
– Physical requirements are similar to Metro Mirror requirements.
– ISL distance extension with active and passive WDM devices is supported.
Figure 3-20 shows an example of a split cluster with ISL configuration.

Figure 3-20 Split cluster with ISL configuration

Use the stretched-cluster system configuration with the volume mirroring option to realize an
availability benefit. After volume mirroring is configured, use the
lscontrollerdependentvdisks command to validate that the volume mirrors are on separate
storage controllers. Having the volume mirrors on separate storage controllers ensures that
access to the volumes is maintained if a storage controller is lost.
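
A brief sketch of that validation follows. The controller name is a hypothetical example, and
the exact argument form of lscontrollerdependentvdisks should be verified in the command-line
reference for your code level.

# List the volumes that depend on the specified storage controller (hypothetical controller name)
lscontrollerdependentvdisks controller0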

When you are implementing a stretched-cluster system configuration, two of the three
quorum disks can be co-located in the same room where the SVC nodes are located.
However, the active quorum disk must be in a separate room. This configuration ensures that
a quorum disk is always available, even after a single-site failure.

For stretched-cluster system configuration, configure the SVC in the following manner:
򐂰 Site 1: Half of the SVC clustered system nodes and one quorum disk candidate
򐂰 Site 2: Half of the SVC clustered system nodes and one quorum disk candidate
򐂰 Site 3: Active quorum disk

When a stretched-cluster configuration is used with volume mirroring, this configuration
provides a high-availability solution that is tolerant of a failure at a single site. If the
primary or secondary site fails, the remaining sites can continue performing I/O operations.

For more information about stretched-cluster configurations, see Appendix C, “SAN Volume
Controller stretched cluster” on page 903.

For more information, see IBM SAN Volume Controller Enhanced Stretched Cluster with
VMware, SG24-8211:
http://www.redbooks.ibm.com/abstracts/sg248211.html?Open

3.3.8 Storage pool configuration


The storage pool is at the center of the many-to-many relationship between the MDisks and
the volumes. It acts as a container from which managed disks contribute chunks of physical
disk capacity that are known as extents and from which volumes are created.

MDisks in the SVC are LUNs that are assigned from the underlying disk subsystems to the
SVC and can be managed or unmanaged. A managed MDisk is an MDisk that is assigned to
a storage pool. Consider the following points:
򐂰 A storage pool is a collection of MDisks. An MDisk can be contained only within a single
storage pool.
򐂰 An SVC supports up to 128 storage pools.
򐂰 The number of volumes that can be allocated from a storage pool is unlimited; however, an
I/O Group is limited to 2,048 volumes, and the clustered system limit is 8,192 volumes.
򐂰 Volumes are associated with a single storage pool, except in cases where a volume is
being migrated or mirrored between storage pools.

The SVC supports extent sizes of 16 MiB, 32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024
MiB, 2048 MiB, 4096 MiB, and 8192 MiB. Support for extent sizes 4096 MiB and 8192 MiB
was added in SVC 6.1. The extent size is a property of the storage pool and is set when the
storage pool is created. All MDisks in the storage pool have the same extent size, and all
volumes that are allocated from the storage pool have the same extent size. The extent size
of a storage pool cannot be changed. If you want another extent size, the storage pool must
be deleted and a new storage pool configured.

Table 3-2 on page 104 lists all of the available extent sizes in an SVC.



Table 3-2 Extent size and total storage capacities per system
Extent size (MiB)    Total storage capacity manageable per system
16                   64 TiB
32                   128 TiB
64                   256 TiB
128                  512 TiB
256                  1 PiB
512                  2 PiB
1024                 4 PiB
2048                 8 PiB
4096                 16 PiB
8192                 32 PiB
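
Because the extent size is fixed at creation time, it is specified when the storage pool is
created. The following minimal sketch, with a hypothetical pool name and MDisk list, creates a
pool with a 256 MiB extent size and then confirms it:

# Create a storage pool with a 256 MiB extent size from two existing MDisks (hypothetical names)
mkmdiskgrp -name Pool_DS8K_1 -ext 256 -mdisk mdisk0:mdisk1
# Display the pool details, including the extent size
lsmdiskgrp Pool_DS8K_1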

Consider the following information about storage pools:


򐂰 Maximum clustered system capacity is related to the extent size:
– 16 MiB extent = 64 TiB and doubles for each increment in extent size, for example,
32 MiB = 128 TiB. We strongly advise 128 MiB - 256 MiB. The IBM Storage
Performance Council (SPC) benchmarks used a 256 MiB extent.
– Pick the extent size and then use that size for all storage pools.
– You cannot migrate volumes between storage pools with separate extent sizes.
However, you can use volume mirroring to create copies between storage pools with
separate extent sizes.
򐂰 Storage pool reliability, availability, and serviceability (RAS) considerations:
– It might make sense to create multiple storage pools; however, you must ensure that a
host only gets volumes that are built from one storage pool. If that storage pool goes
offline, it affects only a subset of the hosts that are attached to the SVC. However, this
approach can lead to a high number of storage pools, which can approach the SVC limits.
– If you do not isolate hosts to storage pools, create one large storage pool. Creating one
large storage pool assumes that the physical disks are all the same size, speed, and
RAID level.
– The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data
on it. Do not put MDisks into a storage pool until they are needed.
– Create at least one separate storage pool for all the image mode volumes.
– Ensure that all host persistent reserves are removed from LUNs that are given to
the SVC.
򐂰 Storage pool performance considerations
It might make sense to create multiple storage pools if you are attempting to isolate
workloads to separate disk spindles. Storage pools with too few MDisks can overload those
MDisks, so a higher spindle count in a storage pool is better for meeting workload
requirements.

򐂰 The storage pool and SVC cache relationship
The SVC employs cache partitioning to limit the potentially negative effect that a poorly
performing storage controller can have on the clustered system. The partition allocation
size is based on the number of configured storage pools. This design protects against
individual controller overloading and failures from using write cache and degrading the
performance of the other storage pools in the clustered system. For more information, see
2.8.3, “Cache” on page 46.
Table 3-3 shows the limit of the write-cache data.
Table 3-3 Limit of the cache data
Number of storage pools    Upper limit
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%

Consider the rule that no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache starts to limit incoming I/O rates for volumes that are created from the storage
pool. If a particular partition reaches this upper limit, the net result is the same as a global
cache resource that is full. That is, the host writes are serviced on a one-out-one-in basis
because the cache destages writes to the back-end disks.
However, only writes that are targeted at the full partition are limited. All I/O that is
destined for other (non-limited) storage pools continues as normal. The read I/O requests
for the limited partition also continue normally. However, because the SVC is destaging
write data at a rate that is greater than the controller can sustain (otherwise, the partition
does not reach the upper limit), read response times are also likely affected.

3.3.9 Volume configuration


An individual volume is a member of one storage pool and one I/O Group. When a volume is
created, you first identify the performance, availability, and cost requirements for that
volume, and then select the storage pool accordingly.

The storage pool defines which MDisks that are provided by the disk subsystem make up the
volume. The I/O Group, which is made up of two nodes, defines which SVC nodes provide I/O
access to the volume.

Important: No fixed relationship exists between I/O Groups and storage pools.

Perform volume allocation that is based on the following considerations:


򐂰 Optimize performance between the hosts and the SVC by attempting to distribute volumes
evenly across available I/O Groups and nodes within the clustered system.
򐂰 Reach the level of performance, reliability, and capacity that you require by using the
storage pool that corresponds to your needs. (You can access any storage pool from any
node.) That is, choose the storage pool that fulfills the demands for your volumes
regarding performance, reliability, and capacity.



򐂰 I/O Group considerations:
– When you create a volume, it is associated with one node of an I/O Group. By default,
when you create a volume, it is associated with the next node by using a round-robin
algorithm. You can specify a preferred access node, which is the node through which
you send I/O to the volume instead of by using the round-robin algorithm. A volume is
defined for an I/O Group.
– Even if you have eight paths for each volume, all I/O traffic flows only toward one node
(the preferred node). Therefore, only four paths are used by the IBM Subsystem Device
Driver (SDD). The other four paths are used only in a failure of the preferred node or
when a concurrent code upgrade is running.
򐂰 Creating image mode volumes:
– Use image mode volumes when an MDisk already has data on it from a non-virtualized
disk subsystem. When an image mode volume is created, it directly corresponds to the
MDisk from which it is created. Therefore, volume logical block address (LBA) x =
MDisk LBA x. The capacity of image mode volumes defaults to the capacity of the
supplied MDisk.
– When you create an image mode disk, the MDisk must have a mode of unmanaged,
which does not belong to any storage pool. (A capacity of 0 is not allowed.) Image
mode volumes can be created in sizes with a minimum granularity of 512 bytes, and
they must be at least one block (512 bytes) in size.
򐂰 Creating managed mode volumes with sequential or striped policy
When a managed mode volume with sequential or striped policy is created, you must use
a number of MDisks that contain extents that are free and equal to or greater than the size
of the volume that you want to create. Sufficient extents might be available on the MDisk,
but a contiguous block that is large enough to satisfy the request might not be available.
򐂰 Thin-provisioned volume considerations:
– When a thin-provisioned volume is created, you must understand the utilization
patterns of the applications or groups of users that access this volume. You also
must consider the actual size of the data, the rate of data creation, and the rate at which
existing data is modified or deleted.
– The following operating modes for thin-provisioned volumes are available:
• Autoexpand volumes allocate storage from a storage pool on demand with minimal
required user intervention. However, a misbehaving application can cause a volume
to expand until it uses all of the storage in a storage pool.
• Non-autoexpand volumes have a fixed amount of assigned storage. In this case, the
user must monitor the volume and assign more capacity when required. A
misbehaving application can cause only the volume that it uses to fill up.
– Depending on the initial size for the real capacity, the grain size and a warning level can
be set. If a volume goes offline through a lack of available physical storage for
autoexpand or because a volume that is marked as non-autoexpand was not expanded
in time, a danger exists of data being left in the cache until storage is made available.
This situation is not a data integrity or data loss issue, but you must not rely on the SVC
cache as a backup storage mechanism.

Important: Keep a warning level on the used capacity so that it provides adequate
time to respond and provision more physical capacity.

Warnings must not be ignored by an administrator.

Use the autoexpand feature of the thin-provisioned volumes.

– When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KiB, 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KiB, which is the recommended option. If you select 32 KiB for the
grain size, the volume size cannot exceed 260,000 GiB.
The grain size cannot be changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which
can adversely affect performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KiB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the
FlashCopy function.
– Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a thin-provisioned volume requires
approximately one directory I/O for every user I/O.
– The directory is two-way write-back-cached (as with the SVC fastwrite cache), so
certain applications perform better.
– Thin-provisioned volumes require more processor processing, so the performance per
I/O Group can also be reduced.
– A thin-provisioned volume feature that is called zero detect provides clients with the
ability to reclaim unused allocated disk space (zeros) when they are converting a fully
allocated volume to a thin-provisioned volume by using volume mirroring.
򐂰 Volume mirroring guidelines:
– Create or identify two separate storage pools to allocate space for your mirrored
volume.
– Allocate the storage pools that contain the mirrors from separate storage controllers.
– If possible, use a storage pool with MDisks that share characteristics. Otherwise, the
volume performance can be affected by the poorer performing MDisk.
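
To illustrate the thin-provisioning and volume mirroring guidelines above, the following hedged
sketch creates a 100 GiB thin-provisioned volume with autoexpand, the default 256 KiB grain
size, and a capacity warning, and then creates a mirrored volume with one copy in each of two
storage pools. The pool and volume names are hypothetical; verify the mkvdisk parameters in the
command-line reference.

# Thin-provisioned volume: 2% initial real capacity, autoexpand, warning at 80% of capacity
mkvdisk -mdiskgrp Pool_A -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol01
# Mirrored volume: two copies, one in each storage pool, ideally on separate storage controllers
mkvdisk -mdiskgrp Pool_A:Pool_B -iogrp 0 -size 100 -unit gb -copies 2 -name mirrored_vol01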

3.3.10 Host mapping (LUN masking)


For the host and application servers, the following guidelines apply:
򐂰 Each SVC node presents a volume to the SAN through four ports. Because two nodes are
used in normal operations to provide redundant paths to the same storage, a host with two
HBAs can see multiple paths to each LUN that is presented by the SVC. Use zoning to
limit the pathing from a minimum of two paths to the available maximum of eight paths,
depending on the kind of high availability and performance that you want in your
configuration.



It is best to use zoning to limit the pathing to four paths. The hosts must run a multipathing
device driver to limit the pathing back to a single device. The multipathing driver that is
supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native
multipath I/O (MPIO) drivers on selected hosts are supported. For more operating
system-specific information about MPIO support, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946
The actual version of the Subsystem Device Driver Device Specific Module (SDDDSM) for
IBM products is available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000350
򐂰 The number of paths to a volume from a host to the nodes in the I/O Group that owns the
volume must not exceed eight, even if eight is not the maximum number of paths that are
supported by the multipath driver (SDD supports up to 32). To restrict the number of paths
to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more
than two ports from each SVC node in the I/O Group that owns the volume.

Multipathing: We suggest the following number of paths per volume (n+1 redundancy):
򐂰 With two HBA ports, zone each HBA port to two SVC ports for a total of four
paths.
򐂰 With four HBA ports, zone each HBA port to one SVC port for a total of four
paths.
򐂰 Optional (n+2 redundancy): With four HBA ports, zone each HBA port to two SVC
ports for a total of eight paths.

We use the term HBA port to describe the SCSI Initiator. We use the term SAN Volume
Controller port to describe the SCSI target.

The maximum number of host paths per volume must not exceed eight.

򐂰 If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
򐂰 To configure greater than 256 hosts, you must configure the host to I/O Group mappings
on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to
create 1,024 host objects on an eight-node SVC clustered system. Volumes can be
mapped only to a host that is associated with the I/O Group to which the volume belongs.
򐂰 Port masking
You can use a port mask to control the node target ports that a host can access, which
satisfies the following requirements:
– As part of a security policy to limit the set of WWPNs that can obtain access to any
volumes through an SVC port
– As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and therefore limit the number of host objects that
are configured without resorting to switch zoning
򐂰 The port mask is an optional parameter of the mkhost and chhost commands. The port
mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all
ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value
is 1111 (all ports enabled).

򐂰 The SVC supports connection to the Cisco MDS family and Brocade family. For more
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
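
The following hedged sketch applies the port mask guidance above by creating a host object that
is restricted to SVC node ports 1 and 2 and then mapping a volume to it. The WWPNs, names, and
SCSI ID are hypothetical, and the -mask parameter name should be verified in the mkhost
description in the command-line reference.

# Create a host object with two HBA WWPNs and a port mask that enables node ports 1 and 2 only
mkhost -name AppServer01 -hbawwpn 210000E08B054CAA:210000E08B054CAB -mask 0011
# Map a volume to the new host object with SCSI ID 0
mkvdiskhostmap -host AppServer01 -scsi 0 vdisk01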

3.3.11 Advanced Copy Services


The SVC offers the following Advanced Copy Services:
򐂰 FlashCopy
򐂰 Metro Mirror
򐂰 Global Mirror

Layers: SVC 6.3 introduced a new clustered system property that is called layer.
This property is used when a copy services partnership exists between an SVC and an
IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered
systems are in the replication layer, and this setting cannot be changed. By default, an IBM
Storwize V7000 is in the storage layer, which must be changed by using the chsystem CLI
command before you use it in any copy services partnership with the SVC.

Figure 3-21 shows an example for replication and storage layer.

Figure 3-21 Replication and storage layer
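
As a hedged illustration, the layer of an IBM Storwize V7000 can be checked and changed with the
following commands, which are run on the Storwize system before the partnership with the SVC is
created. The -layer parameter name is our assumption from the 7.x CLI; confirm it in the
Storwize command-line reference.

# On the Storwize V7000: display the system properties, including the current layer
lssystem
# Change the system from the storage layer to the replication layer (assumed parameter name)
chsystem -layer replication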

SVC Advanced Copy Services must apply the guidelines that are described next.

FlashCopy guidelines
Consider the following FlashCopy guidelines:
򐂰 Identify each application that must have a FlashCopy function that is implemented for its
volume.
򐂰 FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
򐂰 You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.



򐂰 Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned,
or Incremental.
򐂰 Define which FlashCopy rate best fits your requirement in terms of the performance and
the amount of time to complete the FlashCopy. Table 3-4 shows the relationship of the
background copy rate value to the attempted number of grains to be split per second.
򐂰 Define the grain size that you want to use. A grain is the unit of data that is represented by
a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer
FlashCopy elapsed time and a higher space usage in the FlashCopy target volume.
Smaller grain sizes can have the opposite effect. The data structure and the source data
location can modify those effects.
In an actual environment, check the results of your FlashCopy procedure in terms of the
data that is copied at every run and the elapsed time, and compare them to the new
SVC FlashCopy results. If necessary, adapt the grains per second and the copy rate
parameters to fit your environment’s requirements. See Table 3-4.

Table 3-4 Grain splits per second
User percentage    Data copied per second    256 KiB grain per second    64 KiB grain per second
1 - 10             128 KiB                   0.5                         2
11 - 20            256 KiB                   1                           4
21 - 30            512 KiB                   2                           8
31 - 40            1 MiB                     4                           16
41 - 50            2 MiB                     8                           32
51 - 60            4 MiB                     16                          64
61 - 70            8 MiB                     32                          128
71 - 80            16 MiB                    64                          256
81 - 90            32 MiB                    128                         512
91 - 100           64 MiB                    256                         1024
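
For example, at a background copy rate of 41 - 50, the SVC attempts to copy 2 MiB per second for
the mapping, so fully copying a 100 GiB volume takes roughly 102,400 MiB / 2 MiBps = 51,200
seconds, or about 14 hours. The following hedged sketch creates and starts a FlashCopy mapping
with a copy rate of 50. The volume and mapping names are hypothetical; verify the mkfcmap and
startfcmap parameters in the command-line reference.

# Create a FlashCopy mapping from a source volume to an existing target volume with copy rate 50
mkfcmap -source vdisk_prod01 -target vdisk_clone01 -copyrate 50
# Prepare (flush the cache) and start the mapping
startfcmap -prep fcmap0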

Metro Mirror and Global Mirror guidelines


SVC supports intracluster and intercluster Metro Mirror and Global Mirror. From the
intracluster point of view, any single clustered system is a reasonable candidate for a Metro
Mirror or Global Mirror operation. However, the intercluster operation needs at least two
clustered systems that are separated by a number of moderately high-bandwidth links.

Figure 3-22 on page 111 shows a schematic of Metro Mirror connections.

Figure 3-22 Metro Mirror connections

Figure 3-22 contains two redundant fabrics. Part of each fabric exists at the local clustered
system and at the remote clustered system. No direct connection exists between the two
fabrics.

Technologies for extending the distance between two SVC clustered systems can be broadly
divided into the following categories:
򐂰 FC extenders
򐂰 SAN multiprotocol routers

Because of the more complex interactions that are involved, IBM explicitly tests products of
this class for interoperability with the SVC. For more information about the current list of
supported SAN routers in the supported hardware list, see this website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946

IBM tested a number of FC extenders and SAN router technologies with the SVC. You must
plan, install, and test FC extenders and SAN router technologies with the SVC so that the
following requirements are met:
򐂰 The round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global
Mirror, this limit allows a distance between the primary and secondary sites of up to
8000 km (4970.96 miles) by using a planning assumption of 100 km (62.13 miles) per
1 ms of round-trip link latency.
򐂰 The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link often provides a round-trip latency of 1 ms per
100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies,
which affect the maximum supported distance.
򐂰 The configuration must be tested with the expected peak workloads.
򐂰 When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
the IBM SVC intercluster heartbeat traffic. The amount of traffic depends on how many
nodes are in each of the two clustered systems.
Figure 3-23 on page 112 shows the amount of heartbeat traffic, in megabits per second,
that is generated by various sizes of clustered systems.



Figure 3-23 Amount of heartbeat traffic

򐂰 These numbers represent the total traffic between the two clustered systems when no I/O
is taking place to mirrored volumes. Half of the data is sent by one clustered system, and
half of the data is sent by the other clustered system. The traffic is divided evenly over all
available intercluster links. Therefore, if you have two redundant links, half of this traffic is
sent over each link during fault-free operation.
򐂰 The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that was specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of 1 minute or less, plus the required synchronization copy
bandwidth.
With no active synchronization copies and no write I/O disks in Metro Mirror or Global
Mirror relationships, the SVC protocols operate with the bandwidth that is indicated in
Figure 3-23. However, you can determine the true bandwidth that is required for the link
only by considering the peak write bandwidth to volumes that are participating in Metro
Mirror or Global Mirror relationships and adding it to the peak synchronization copy
bandwidth.
򐂰 If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true, even during single failure conditions.
򐂰 The configuration is tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary.
򐂰 The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SVC.
򐂰 The FC extender must be treated as a normal link.
򐂰 The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the SVC by IBM. Make these
measurements during installation and record the measurements. Testing must be
repeated following any significant changes to the equipment that provides the intercluster
link.

Global Mirror guidelines


Consider the following guidelines:
򐂰 When SVC Global Mirror is used, all components in the SAN must sustain the workload
that is generated by application hosts and the Global Mirror background copy workload.
Otherwise, Global Mirror can automatically stop your relationships to protect your
application hosts from increased response times. Therefore, it is important to configure
each component correctly.

򐂰 Use a SAN performance monitoring tool, such as IBM Tivoli Storage Productivity Center,
which allows you to continuously monitor the SAN components for error conditions and
performance problems. This tool helps you detect potential issues before they affect your
disaster recovery solution.
򐂰 The long-distance link between the two clustered systems must be provisioned to allow for
the peak application write workload to the Global Mirror source volumes and the
client-defined level of background copy.
򐂰 The peak application write workload ideally must be determined by analyzing the SVC
performance statistics.
򐂰 Statistics must be gathered over a typical application I/O workload cycle, which might be
days, weeks, or months, depending on the environment on which the SVC is used. These
statistics must be used to find the peak write workload that the link must support.
򐂰 Characteristics of the link can change with use. For example, latency can increase as the
link is used to carry an increased bandwidth. The user must be aware of the link’s behavior
in such situations and ensure that the link remains within the specified limits. If the
characteristics are not known, testing must be performed to gain confidence of the link’s
suitability.
򐂰 Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which depends on the technology that is used to implement the link. For
example, when you are transmitting FC traffic over an IP link, you might want to enable
jumbo frames to improve efficiency.
򐂰 The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
򐂰 The use of Global Mirror and Metro Mirror between the SVC clustered system and IBM
Storwize systems with a minimum code level of 6.3 is supported.
򐂰 Support exists for cache-disabled volumes to participate in a Global Mirror relationship;
however, this design is not a preferred practice.
򐂰 The gmlinktolerance parameter of the remote copy partnership must be set to an
appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for
most clients.
򐂰 During SAN maintenance, the user must choose to reduce the application I/O workload
during maintenance (so that the degraded SAN components can manage the new
workload); disable the gmlinktolerance feature; increase the gmlinktolerance value
(which means that application hosts might see extended response times from Global
Mirror volumes); or stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance lasting x minutes, it must be
reset only to the normal value x minutes after the end of the maintenance activity.
If gmlinktolerance is disabled during maintenance, it must be re-enabled after the
maintenance is complete.
򐂰 Global Mirror volumes must have their preferred nodes evenly distributed between the
nodes of the clustered systems. Each volume within an I/O Group has a preferred node
property that can be used to balance the I/O load between nodes in that group.
Figure 3-24 on page 114 shows the correct relationship between volumes in a Metro
Mirror or Global Mirror solution.



Figure 3-24 Correct volume relationship

򐂰 The capabilities of the storage controllers at the secondary clustered system must be
provisioned to allow for the peak application workload to the Global Mirror volumes, plus
the client-defined level of background copy, plus any other I/O that is performed at the
secondary site, to maximize the amount of I/O that applications can perform to Global Mirror
volumes. Otherwise, the performance of applications at the primary clustered system can be
limited by the performance of the back-end storage controllers at the secondary clustered
system.
򐂰 A complete review must be performed before Serial Advanced Technology Attachment
(SATA) for Metro Mirror or Global Mirror secondary volumes is used. The use of a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the SVC cache might not be able to buffer all the writes, and flushing cache
writes to SATA might slow I/O at the production site.
򐂰 Storage controllers must be configured to support the Global Mirror workload that is
required of them. You can dedicate storage controllers to only Global Mirror volumes,
configure the controller to ensure sufficient quality of service (QoS) for the disks that are
used by Global Mirror, or ensure that physical disks are not shared between Global Mirror
volumes and other I/O, for example, by not splitting an individual RAID array.
򐂰 MDisks within a Global Mirror storage pool must be similar in their characteristics, for
example, RAID level, physical disk count, and disk speed. This requirement is true of all
storage pools, but maintaining performance is important when Global Mirror is used.
򐂰 When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship is in the inconsistent_copying state.
Therefore, the Global Mirror secondary volume is not in a usable state until the copy
completes and the relationship returns to a Consistent state. For this reason, it is highly
advisable to create a FlashCopy of the secondary volume before the relationship is
restarted. When started, the FlashCopy provides a consistent copy of the data, even while
the Global Mirror relationship is copying.
If the Global Mirror relationship does not reach the Synchronized state (for example, if the
intercluster link experiences further persistent I/O errors), the FlashCopy target can be
used at the secondary site for disaster recovery purposes.

򐂰 If you plan to use a Fibre Channel over IP (FCIP) intercluster link, it is important to design
and size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.

Example 3-2 Wide area network (WAN) link calculation example

Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine WAN link needed
Example:
250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86400 secs
1,000,000,000,000 / 86400 = approximately 12 MB/s,
which means that OC3 or higher is needed (155 Mbps or higher)

򐂰 If compression is available on routers or WAN communication devices, smaller pipelines
might be adequate. Workload is probably not evenly spread across 24 hours. If extended
periods of high data change rates exist, consider suspending Global Mirror during that
time frame.
򐂰 If the network bandwidth is too small to handle the traffic, the application write I/O
response times might be elongated. For the SVC, Global Mirror must support short-term
“Peak Write” bandwidth requirements. SVC Global Mirror is much more sensitive to a lack
of bandwidth than the DS8000.
򐂰 You must also consider the initial sync and resync workload. The Global Mirror
partnership’s background copy rate must be set to a value that is appropriate to the link
and secondary back-end storage. The more bandwidth that you give to the sync and
resync operation, the less workload can be delivered by the SVC for the regular data
traffic.
򐂰 Do not propose Global Mirror if the data change rate exceeds the communication
bandwidth or if the round-trip latency exceeds 80 - 120 ms. A round-trip latency that is
greater than 80 ms requires SCORE/RPQ submission.

3.3.12 SAN boot support


The IBM SVC supports SAN boot or start-up for AIX, Microsoft Windows Server, and other
operating systems. Because SAN boot support can change, check the following websites
regularly:
򐂰 http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
򐂰 http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

3.3.13 Data migration from a non-virtualized storage subsystem


Data migration is an important part of an SVC implementation. Therefore, you must
accurately prepare a data migration plan. You might need to migrate your data for one of the
following reasons:
򐂰 To redistribute workload within a clustered system across the disk subsystem
򐂰 To move workload onto newly installed storage
򐂰 To move workload off old or failing storage, ahead of decommissioning it
򐂰 To move workload to rebalance a changed workload
򐂰 To migrate data from an older disk subsystem to SVC-managed storage
򐂰 To migrate data from one disk subsystem to another disk subsystem



Because multiple data migration methods are available, choose the method that best fits your
environment, operating system platform, type of data, and application’s service-level
agreement (SLA).

We can define data migration as belonging to the following groups:


򐂰 Based on operating system Logical Volume Manager (LVM) or commands
򐂰 Based on special data migration software
򐂰 Based on the SVC data migration feature

With data migration, apply the following guidelines:


򐂰 Choose which data migration method best fits your operating system platform, type of
data, and SLA.
򐂰 Check the following interoperability matrix for the storage subsystem to which your data is
being migrated:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004946
򐂰 Choose where you want to place your data after migration in terms of the storage pools
that relate to a specific storage subsystem tier.
򐂰 Check whether enough free space or extents are available in the target storage pool.
򐂰 Decide whether your data is critical and must be protected by a volume mirroring option or
whether it must be replicated in a remote site for disaster recovery.
򐂰 Prepare offline all of the zone and LUN masking and host mappings that you might need
to minimize downtime during the migration.
򐂰 Prepare a detailed operation plan so that you do not overlook anything at data migration
time.
򐂰 Run a data backup before you start any data migration. Data backup must be part of the
regular data management process.
򐂰 You might want to use the SVC as a data mover to migrate data from a non-virtualized
storage subsystem to another non-virtualized storage subsystem. In this case, you might
have to add checks that relate to the specific storage subsystem that you want to migrate.
Be careful when you are using slower disk subsystems for the secondary volumes for
high-performance primary volumes because the SVC cache might not be able to buffer all
the writes and flushing cache writes to SATA might slow I/O at the production site.

3.3.14 SVC configuration backup procedure


Save the configuration externally when changes, such as adding new nodes and disk
subsystems, are performed on the clustered system. Saving the configuration is a crucial part
of SVC management, and various methods can be applied to back up your SVC
configuration. The preferred practice is to implement an automatic configuration backup by
applying the configuration backup command. For more information about this command for
the CLI, see Chapter 9, “SAN Volume Controller operations using the command-line
interface” on page 493. For more information about the GUI operation, see Chapter 10, “SAN
Volume Controller operations using the GUI” on page 655.
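
A minimal sketch of this procedure follows. The svcconfig backup command creates the backup
files on the configuration node, and they are then copied off the system with secure copy. The
user ID, system address, file location, and local target directory are hypothetical and should
be verified for your code level.

# On the SVC CLI: create the configuration backup files on the configuration node
svcconfig backup
# From a management workstation: copy the backup files off the system (hypothetical IP and paths)
scp admin@svc_cluster_ip:/tmp/svc.config.backup.* /local/backups/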

3.4 Performance considerations
Although storage virtualization with the SVC improves flexibility and provides simpler
management of a storage infrastructure, it can also provide a substantial performance
advantage for various workloads. The SVC caching capability and its ability to stripe volumes
across multiple disk arrays are the reasons why performance improvement is significant when
it is implemented with midrange disk subsystems. This technology is often provided only with
high-end enterprise disk subsystems.

Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool and therefore, have the maximum number of active
spindles at one time. The caching is secondary. The SVC provides additional caching to
the caching that midrange controllers provide (usually a couple of GB). Enterprise systems
have much larger caches.

To ensure the performance that you want and capacity of your storage infrastructure,
undertake a performance and capacity analysis to reveal the business requirements of your
storage environment. When this analysis is done, you can use the guidelines in this chapter to
design a solution that meets the business requirements.

When you are considering performance for a system, always identify the bottleneck and,
therefore, the limiting factor of a specific system. You must also consider the workload for
which you identify that limiting factor, because the component that limits one workload might
not be the same component that limits other workloads.

When you are designing a storage infrastructure with the SVC or implementing an SVC in an
existing storage infrastructure, you must consider the performance and capacity of the SAN,
disk subsystems, SVC, and the known or expected workload.

3.4.1 SAN
The SVC now has the following models:
򐂰 2145-8A4
򐂰 2145-8F4
򐂰 2145-8G4
򐂰 2145-CF8
򐂰 2145-CG8
򐂰 2145-DH8

All of these models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a
performance point of view, connecting the SVC to 8 Gbps or 16 Gbps switches is better.

Correct zoning on the SAN switch brings together security and performance. Implement a
dual HBA approach at the host to access the SVC.



3.4.2 Disk subsystems
From a performance perspective, the following guidelines relate to connecting to an SVC:
򐂰 Connect all storage ports to the switch up to a maximum of 16, and zone them to all of the
SVC ports.
򐂰 Zone all ports on the disk back-end storage to all ports on the SVC nodes in a clustered
system.
򐂰 Also, ensure that you configure the storage subsystem LUN-masking settings to map all
LUNs that are used by the SVC to all the SVC WWPNs in the clustered system.

The SVC is designed to handle large quantities of multiple paths from the back-end storage.
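If you want to verify from the SVC side that all back-end controller ports and LUNs are visible as expected, the following hedged CLI sketch shows one way to check; controller and MDisk names vary by installation:

# Rescan the Fibre Channel fabric for newly mapped back-end LUNs.
detectmdisk

# List the back-end controllers, then check the WWPNs and path counts
# that are reported for one of them (controller0 is an example name).
lscontroller
lscontroller controller0

# Confirm that the discovered MDisks show a status of online.
lsmdisk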

In most cases, the SVC can improve performance, especially on mid-sized to low-end disk
subsystems, older disk subsystems with slow controllers, or uncached disk systems, for the
following reasons:
򐂰 The SVC can stripe across disk arrays, and it can stripe across the entire set of supported
physical disk resources.
򐂰 The SVC has a 4 GB, 8 GB, or 24 GB cache (48 GB with the optional processor card, 2145-CG8 only). Model 2145-DH8 has 32 GB of cache.
򐂰 The SVC can provide automated performance optimization of hot spots by using flash
drives and Easy Tier.

The SVC large cache and advanced cache management algorithms also allow it to improve
on the performance of many types of underlying disk technologies. The SVC capability to
manage (in the background) the destaging operations that are incurred by writes (in addition
to still supporting full data integrity) has the potential to be important in achieving good
database performance.

Depending on the size, age, and technology level of the disk storage system, the total cache
that is available in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in the upper (SVC) or
the lower (disk controller) level of the overall system, the system as a whole can use the
larger amount of cache wherever it is located. Therefore, if the storage control level of the
cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the
SVC cache.

Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on the underlying storage technology and the degree to which the workload exhibits
hotspots or sensitivity to cache size or cache algorithms.

For more information about the SVC cache partitioning capability, see IBM SAN Volume
Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

3.4.3 SAN Volume Controller
The SVC clustered system is scalable up to eight nodes, and the performance is nearly linear
when more nodes are added into an SVC clustered system until it becomes limited by other
components in the storage infrastructure. Although virtualization with the SVC provides a
great deal of flexibility, it does not diminish the necessity to have a SAN and disk subsystems
that can deliver the performance that you want. Essentially, SVC performance improvements
are gained by having as many MDisks as possible, which creates a greater level of concurrent
I/O to the back end without overloading a single disk or array.
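The following hedged CLI sketch illustrates this principle by creating a storage pool from several MDisks and a volume that is striped across all of them; the names, extent size, and capacity are examples only:

# Create a storage pool (managed disk group) with a 256 MB extent size
# from four MDisks that are presented by the back-end storage.
mkmdiskgrp -name Pool_V7000_01 -ext 256 -mdisk mdisk0:mdisk1:mdisk2:mdisk3

# Create a 100 GB striped volume; its extents are spread across all
# MDisks in the pool, which maximizes the number of active spindles.
mkvdisk -name VOL_DB01 -mdiskgrp Pool_V7000_01 -iogrp 0 -size 100 -unit gb -vtype striped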

Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
򐂰 Creating a storage pool
򐂰 Creating volumes
򐂰 Connecting to or configuring hosts that must receive disk space from an SVC clustered
system

For more information about performance and preferred practices for the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

3.4.4 Performance monitoring


Performance monitoring must be a part of the overall IT environment. For the SVC and other
IBM storage subsystems, the official IBM tool to collect performance statistics and provide a
performance report is IBM Tivoli Storage Productivity Center.
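In addition to Tivoli Storage Productivity Center, the cluster keeps recent performance samples that can be inspected directly from the CLI. The following commands are a hedged sketch that assumes a 7.x code level; the node name is an example:

# Set the interval (in minutes) for the I/O statistics files that the
# cluster collects for external monitoring tools.
startstats -interval 5

# Display recent system-wide samples (CPU, MBps, IOPS, and latency).
lssystemstats

# Display recent samples for an individual node.
lsnodestats node1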

For more information about using IBM Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open



Chapter 4. SAN Volume Controller initial configuration

This chapter explains the initial configuration that is required for the IBM SAN Volume
Controller (SVC) and includes the following topics:
򐂰 Managing the cluster
򐂰 Setting up the SAN Volume Controller cluster
򐂰 Configuring the GUI
򐂰 Secure Shell overview
򐂰 Using IPv6



4.1 Managing the cluster
You can manage the SVC in many ways. The following methods are the most common
methods:
򐂰 By using the SVC Management GUI
򐂰 By using a PuTTY-based SVC command-line interface (CLI)
򐂰 By using IBM Tivoli Storage Productivity Center

Figure 4-1 shows the various ways to manage an SVC cluster.

Figure 4-1 SVC cluster management

You have full management control of the SVC regardless of which method you choose. IBM
Tivoli Storage Productivity Center is a robust software product with various functions that
must be purchased separately.

If you have a previously installed SVC cluster in your environment, it is possible that you are
using the SVC Console, which is also known as the Hardware Management Console (HMC).
You can still use it with IBM Tivoli Storage Productivity Center. When you are using the separately purchased product that is called IBM System Storage Productivity Center (SSPC), which is no longer offered, you can log in to your SVC from only one of them at a time.

If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you
are using the SVC Console or IBM Tivoli Storage Productivity Center server because the
SVC CLI is on the cluster and accessed through Secure Shell (SSH), which can be installed
anywhere.

4.1.1 Network requirements for SAN Volume Controller
To plan your installation, consider the TCP/IP address requirements of the SVC cluster and
the requirements for the SVC cluster to access other services. You must also plan the
address allocation and the Ethernet router, gateway, and firewall configuration to provide the
required access and network security.

Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.

Figure 4-2 TCP/IP ports

For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 73.

To assist you in starting an SVC initial configuration, Figure 4-3 on page 124 shows a
common flowchart that covers all of the types of management.



Figure 4-3 SVC initial configuration flowchart

In the next sections, we describe each of the steps in Figure 4-3.

4.2 Setting up the SAN Volume Controller cluster


This section provides the step-by-step instructions that are needed to create the cluster. You must create a cluster to use SVC virtualized storage. The first phase of creating a cluster is performed either as configuration 1, which uses the Technician Port (2145-DH8 nodes), or as configuration 2, which uses the front panel of the SVC (for more information, see 4.2.3, “Initiating the cluster from the front panel” on page 150). The second phase is performed from a web browser by accessing the management GUI.

For our initial configuration (configuration 1), we use the following hardware:
򐂰 2 x 2145-DH8 nodes
򐂰 1 x 32 GB additional memory for each 2145-DH8 node (total 64 GB of memory per node)
򐂰 1 x additional processor for each 2145-DH8 node (total two processors per node)
򐂰 1 x Real-time Compression (RtC) Accelerator card for each 2145-DH8 node
򐂰 1 x four-port host bus adapter (HBA) in each 2145-DH8 node
򐂰 2 x SAN switches (for a redundant SAN fabric)

The back-end storage consists of Storwize V7000 block storage arrays.

The first step is to connect a PC or notebook to the Technician Port on the rear of the SVC node (see Figure 4-4). The Technician Port assigns an IPv4 address through Dynamic Host Configuration Protocol (DHCP), so ensure that your PC or notebook is configured for DHCP. The default IP address of a new node is 192.168.0.1.

The 2145-DH8 does not provide IPv6 addresses on the Technician Port.

For configuration 2, we use the following hardware:


򐂰 2 x 2145-CF8 nodes
򐂰 1 x four port HBAs in each 2145-CF8 node
򐂰 2 x SAN switches (for a redundant SAN fabric)

The back-end storage consists of Storwize V7000 block storage arrays.

Figure 4-4 Rear of 2145-DH8

Nodes: During the initial configuration, you see certificate warnings because the 2145 certificates are self-signed. You can safely accept these warnings.

Follow these steps for the initial configuration phase:


1. After you connect your personal computer or notebook to the Technician Port and validate that you received an IPv4 DHCP address, for example, 192.168.0.12 (the first IP address that the SVC node assigns), open a supported browser. The browser automatically redirects you to 192.168.0.1, and the initial configuration of the cluster can start.
2. Figure 4-5 on page 126 shows the Welcome panel and starts the wizard that allows you to
configure a new system or expand an existing system.



Figure 4-5 Welcome panel

3. The Service Setup panel opens, as shown in Figure 4-6.

Figure 4-6 Service Setup

4. This chapter focuses on setting up a new system, so we select As the first node in a new system and click Next.

Important: If you are adding 2145-DH8 nodes to an existing system, ensure that the existing system is running code level 7.3 or higher. The 2145-DH8 supports only code level 7.3 or higher.

5. The next panel prompts you to set an IP address for the cluster. You can choose between
an IPv4 or IPv6 address. In Figure 4-7 on page 127, we set an IPv4 address.

Figure 4-7 Setting the IP address

6. When you click Next, the Create System panel opens, as shown in Figure 4-8.

Figure 4-8 Create the system

7. Click Close and the web server restarts (Figure 4-9).

Figure 4-9 Restarting Web Server window



8. When the Web Server is restarted, which takes about 2 minutes, the Summary panel opens
(Figure 4-10).

Figure 4-10 System Summary

9. Follow the instructions on Figure 4-10. Disconnect the Ethernet cable from the Technician
Port and your personal computer or notebook. Connect the same personal computer or
notebook to the same network as the system. Click Finish to be redirected to the GUI to
complete the system setup. You can connect to the system IP address from any
management console that is connected to the same network as the system.
10.Whether you are redirected from your personal computer or notebook or connecting to the
Management IP address of the system, the License Agreement panel opens
(Figure 4-11).

Figure 4-11 SAN Volume Controller License Agreement panel

11.Read the license agreement and then click the Accept arrow. The login panel opens
(Figure 4-12).

Figure 4-12 Log in as superuser

12.You must type the default password for the account superuser, which is passw0rd (with a zero, not the letter o). When you click the Log in arrow, you are prompted to change the password
(Figure 4-13).

Figure 4-13 Change initial password

13.Type a new password and type it again to confirm it. The password length is 6 - 63
characters. The password cannot begin or end with a space. After you type the password
twice, click the Log in arrow again.



14.The Welcome to System Setup panel opens (Figure 4-14).

Figure 4-14 Welcome to System Setup panel

15.Click Next. You can choose to give the cluster a new name. We used ITSO_SVC2, as
shown in Figure 4-15 on page 131.

Figure 4-15 Cluster name

16.Click Apply and Next after you type the name of the cluster.
17.The next step is to set the time and date, as shown in Figure 4-16 on page 132.



Figure 4-16 Setting the date and time

18.In this case, we set the date and time manually. At this point, you cannot choose the 24-hour clock; you can switch to it after you complete the initial configuration. We recommend that you use a Network Time Protocol (NTP) server instead so that all of your SAN and storage devices have a common time stamp for troubleshooting. (These settings can also be changed from the CLI after the wizard completes, as shown after these steps.)
19.Click Apply and Next. The Licensed Functions panel opens, as shown in Figure 4-17 on
page 133.

Figure 4-17 Licensed Functions

20.Enter the total purchased capacity for your system as authorized by your license
agreement. (Figure 4-17 is only an example.) Click Apply and Next. The Configure
System Topology panel opens, as shown in Figure 4-18 on page 134.



Figure 4-18 Configure System Topology

21.You can either choose Single site or Multiple sites (which is also known as a stretched
system). If you are installing a stretched system, the panel that is shown in Figure 4-19 on
page 135 opens.

Figure 4-19 Site names

22.Choose the names that you want for each site or keep the defaults. (These names can be
changed after the initial configuration is complete.) Click Apply and Next.
23.The Assign Nodes panel opens (Figure 4-20 on page 136).



Figure 4-20 Assign Nodes

24.If a node is assigned to the wrong site, for example, the first node is shown at site1 but is physically located at site2, you can move it to the correct site. Click the icon with opposing arrows, as shown in Figure 4-21 on page 137.

Figure 4-21 Node site change

25.Node1 is now located in site2. This assignment can be reversed if the change was a mistake or you change your mind. You can use the other radio button to scan for more node candidates (if you are configuring a system with more than two nodes).
26.Next, add a node from the drop-down menu, as shown in Figure 4-22 on page 138.



Figure 4-22 Add a node

27.Choose the node with the correct panel ID for that site (if you have more than two nodes in the cluster).
28.Figure 4-23 on page 139 shows that the node was added. The node icon changed from an
outline to an actual node icon.

Figure 4-23 New node added

29.Click Next. The External Storage panel (Figure 4-24 on page 140) opens. You use this
panel to optionally assign external storage to a site. This panel requires that the external
storage is already zoned to the SVC.



Figure 4-24 External Storage

30.In the following panels, we show how to assign a storage controller to a specific site. In
Figure 4-25 on page 141, we choose the controller that we want to assign to a specific
site.

Figure 4-25 Modify Site controller0

31.Right-click the controller that you want to assign and select Modify Site. Use the
drop-down menu to select the site (Figure 4-26).

Figure 4-26 Select a site

32.Choose the site where the controller is located. After you select the site, click Next.
33.The site name displays in the External Storage panel that is shown in Figure 4-27 on
page 142.



Figure 4-27 Controller site selected

34.Repeat the same steps for all of the controllers and click Next. The external storage site
assignment is complete.
35.Figure 4-28 shows the Email Event Notifications configuration panel.

Figure 4-28 Email Event Notifications

36.Setting up email event notifications is optional, but we recommend that you set them up.
The next panels show how to set up the email event notifications.

Important: A valid Simple Mail Transfer Protocol (SMTP) server IP address must be
available to complete this step.

37.In the first panel, which is shown in Figure 4-29, set the system location information.

Figure 4-29 System Location panel

38.Click Next to enter the contact details, as shown in Figure 4-30 on page 144.



Figure 4-30 Contact Details

39.Click Apply and Next.


40.Enter the IP addresses of the email servers, as shown in Figure 4-31.

Figure 4-31 Email server IP address

41.You can click Ping to verify whether network access exists to the email server (SMTP
server).
42.Enter the local users who will receive notifications when an event occurs (Figure 4-32 on
page 146).



Figure 4-32 Email Notifications

43.The callhome@de.ibm.com email address is a default user that cannot be deleted. You can
add more users to receive email notifications by clicking the plus (+) icon. You can select
the type of notifications to send to the defined users. The following notification types are
available:
– Errors
– Events
– Notifications
– Inventory
44.Click Apply and Next. The Summary panel opens (Figure 4-33 on page 147).

Figure 4-33 Summary

45.Click Finish to complete the initial configuration (configuration 1), which takes you to the
System overview panel. The view depends on whether the initial configuration was
created as a single site or as a stretched system. The following panel shows a single site
system (Figure 4-34).

Figure 4-34 Single site system overview

46.If you configured the system as a stretched system, you see the panel that is shown in
Figure 4-35 on page 148.



Figure 4-35 Stretched system overview

Now, the initial configuration is complete. Next, you configure the storage systems, hosts, and
so on.
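Several of the settings that the wizard configures can also be adjusted later from the CLI. As a hedged example that assumes an NTP server at 10.0.1.5 (an example address), the time settings from step 18 can be changed as follows:

# List the available time zone codes and set the required time zone
# (522 is only an example code).
lstimezones
settimezone -timezone 522

# Point the cluster at an NTP server so that all SAN and storage
# devices share a common time stamp for troubleshooting.
chsystem -ntpip 10.0.1.5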

4.2.1 Service panels


This section provides an overview of the service panels that are available, depending on your
SVC nodes.

Figure 4-36 shows the SVC node for the 2145-8G4 and 2145-8A4 models.

Figure 4-36 SVC 8G4 node front and operator panel

Figure 4-37 on page 149 shows the SVC node 2145-CF8 front panel.

Figure 4-37 SVC CF8 front panel

Figure 4-38 shows the SVC node 2145-CG8 model.

Figure 4-38 SVC CG8 node front and operator panel

SVC V6.1 and later code levels introduced an additional method for performing service tasks. In addition to using the front panel, you can service a node through an Ethernet connection by using a web browser or the CLI. A separate service IP address for each node is required.
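The service assistant also provides its own CLI (the sainfo and satask command sets) over SSH to the service IP address. The following hedged sketch shows typical commands; the address values are examples only:

# List the nodes that are visible to the service assistant.
sainfo lsservicenodes

# Set the service IP address of a node (values are examples).
satask chserviceip -serviceip 10.18.228.60 -gw 10.18.228.1 -mask 255.255.255.0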

4.2.2 Prerequisites
Ensure that the SVC nodes are installed and that Ethernet connectivity and Fibre Channel
(FC) connectivity are configured correctly. For more information about the physical
connectivity to the SVC, see Chapter 3, “Planning and configuration” on page 73.



Before you configure the cluster, ensure that the following information is available:
򐂰 License
The licenses indicate whether the client is permitted to use FlashCopy, Remote Copy, and
Real-time Compression (RtC). The license also indicates how much capacity the client is
licensed to virtualize.
򐂰 For IPv4 addressing:
– Cluster IPv4 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv4 subnet mask.
– Gateway IPv4 address.
򐂰 For IPv6 addressing:
– Cluster IPv6 addresses: These addresses include one address for the cluster and
another address for the service address.
– IPv6 prefix.
– Gateway IPv6 address.

You perform the first step to create a cluster from the front panel of the SVC. The second step
is performed from a web browser by accessing the management GUI.

Important: The IBM SVC must always be configured in a cluster. Stand-alone configuration is not supported, and it can lead to data loss and degraded performance.

4.2.3 Initiating the cluster from the front panel


After the hardware is installed in the racks, you need to configure (initiate) the cluster through
the physical service panel. This procedure applies to the following models:
򐂰 2145-8A4
򐂰 2145-8G4
򐂰 2145-CF8
򐂰 2145-CG8

For more information, see 4.2.1, “Service panels” on page 148.

Follow these steps to perform the second configuration (configuration 2) of phase one:
1. Choose any node that is a member of the cluster that is being created.

Nodes: After you successfully create and initialize the cluster on the selected node, use
a separate process to add nodes to your cluster.

2. Press and release the Up or Down button until Actions is displayed.

Important: During these steps, if a timeout occurs while you are entering the input for
the fields, you must begin again from step 2. All of the changes are lost, so ensure that
you have all of the information available before you begin again.

3. Press and release the Select button.

4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6
address, press and release the Up or Down button until New Cluster IPv4? or New
Cluster IPv6? is displayed.
Figure 4-39 shows the various options for the cluster creation.

Figure 4-39 Cluster IPv4? and Cluster IPv6? options on the front panel display

If the New Cluster IPv4? or New Cluster IPv6? action is displayed, move to step 5.
If the New Cluster IPv4? or New Cluster IPv6? action is not displayed, this node is
already a member of a cluster. Complete the following steps:
a. Press and release the Up or Down button until Actions is displayed.
b. Press and release the Select button to return to the Main Options menu.
c. Press and release the Up or Down button until Cluster: is displayed. The name of the
cluster to which the node belongs is displayed on line two of the panel.
In this case, you have two options:
Your first option is to delete this node from the cluster by completing the following
steps:
i. Press and release the Up or Down button until Actions is displayed.
ii. Press and release the Select button.
iii. Press and release the Up or Down button until Remove Cluster? is displayed.
iv. Press and hold the Up button.
v. Press and release the Select button.
vi. Press and release the Up or Down button until Confirm remove? is displayed.
vii. Press and release the Select button.
viii.Release the Up button, which deletes the cluster information from the node.
ix. Return to step 1 on page 150 and start again.
Your second option (if you do not want to remove this node from an existing cluster) is
to review the situation to determine the correct nodes to include in the new cluster.
5. Press and release the Select button to create the cluster.
6. Press and release the Select button again to modify the IP address.
7. Use the Up or Down navigation button to change the value of the first field of the
IP address to the value that was chosen.



IPv4 and IPv6: Consider the following points:
򐂰 For IPv4, pressing and holding the Up or Down buttons increments or decreases the
IP address field by units of 10. The field value rotates 0 - 255 with the Down button,
and 255 - 0 with the Up button.
򐂰 For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal
values. Enter the full address by working across a series of four panels to update
each of the four-digit hexadecimal values that make up the IPv6 addresses. The
panels consist of eight fields, where each field is a four-digit hexadecimal value.

8. Use the Right navigation button to move to the next field. Use the Up or Down navigation
button to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10.When the last field of the IP address is changed, press the Select button.
11.Press the Right arrow button:
– For IPv4, IPv4 Subnet: is displayed.
– For IPv6, IPv6 Prefix: is displayed.
12.Press the Select button.
13.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.
14.When the last field of IPv4 Subnet/IPv6 Mask is changed, press the Select button.
15.Press the Right navigation button:
– For IPv4, IPv4 Gateway: is displayed.
– For IPv6, IPv6 Gateway: is displayed.
16.Press the Select button.
17.Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address
fields were changed.
18.When the changes to all of the Gateway fields are made, press the Select button.
19.To review the settings before the cluster is created, use the Right and Left buttons. Make
any necessary changes, use the Right and Left buttons to see “Confirm Created?”, and
then press the Select button.
20.After you complete this task, the following information is displayed on the service display
panel:
– Cluster: is displayed on line one.
– A temporary, system-assigned cluster name that is based on the IP address is
displayed on line two.
If the cluster is not created, Create Failed: is displayed on line one of the service display.
Line two contains an error code. For more information about the error codes and to
identify the reason why the cluster creation failed and the corrective action to take, see
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901.

When you create the cluster from the front panel with the correct IP address format, you can
finish the cluster configuration by accessing the management GUI, completing the Create
Cluster wizard, and adding other nodes to the cluster.

Important: At this time, do not repeat this procedure to add other nodes to the cluster.

To add nodes to the cluster, follow the steps that are described in Chapter 9, “SAN Volume
Controller operations using the command-line interface” on page 493 and Chapter 10,
“SAN Volume Controller operations using the GUI” on page 655.

4.3 Configuring the GUI


After you complete the tasks that are described in 4.2, “Setting up the SAN Volume Controller
cluster” on page 124, complete the cluster setup by using the SVC Console. Follow the steps
that are described in 4.3.1, “Completing the Create Cluster wizard” on page 153 to create the
cluster and complete the configuration.

Important: Ensure that the SVC cluster IP address (svcclusterip) can be reached
successfully by using a ping command from the network.

4.3.1 Completing the Create Cluster wizard


You can access the management GUI by opening any supported web browser. Complete the
following steps:
1. Open the web GUI from the SSPC Console or from a supported web browser on any
workstation that can communicate with the cluster. We suggest that you use Firefox
Version 6 for your web browser.
2. Open a supported web browser and point to the IP address that you entered in step 7 on
page 151:
http://svcclusteripaddress/
(You are redirected to https://svcclusteripaddress/, which is the default address for
access to the SVC cluster.)
The remaining steps are the same as for a 2145-DH8 cluster; therefore, we do not show the panels again in this book.

4.3.2 Post-requisites
Perform the following steps to complete the SVC cluster configuration:
1. Configure the SSH keys for the command-line user, as shown in 4.4, “Secure Shell
overview” on page 154.
2. Configure user authentication and authorization.
3. Set up event notifications and inventory reporting.
4. Create the storage pools.
5. Add an MDisk to the storage pool.
6. Identify and create volumes.
7. Create host objects and map volumes to them (a brief CLI sketch follows this list).
8. Identify and configure the FlashCopy mappings and Metro Mirror relationship.
9. Back up configuration data.
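Most of these steps are described in detail in later chapters. As a brief, hedged CLI sketch of step 7, a host object and a volume-to-host mapping can be created as follows (the WWPNs, host name, and volume name are examples):

# Create a host object that contains the two WWPNs of the host HBAs.
mkhost -name W2K12_HOST01 -fcwwpn 2100000E1E09B068:2100000E1E09B069

# Map an existing volume to the new host object.
mkvdiskhostmap -host W2K12_HOST01 VOL_DB01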



4.4 Secure Shell overview
Secure Shell (SSH) can be used to access the SVC with a command-line interface (CLI),
such as PuTTY. You can choose between password or SSH key authentication, or you can
choose both password and SSH key authentication for the SVC CLI. We describe SSH in the
following sections.

Tip: If you choose not to create an SSH key pair, you can still access the SVC cluster by
using the SVC CLI, if you have a user password. You are authenticated through the user
name and password.

The connection is secured by using a private key and a public key pair. Securing the
connection includes the following steps:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server (SVC cluster).
3. A private key identifies the client. The private key is checked against the public key during
the connection. The private key must be protected.
4. Also, the SSH server must identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.

SSH is the communication vehicle between the management system (the System Storage
Productivity Center or any workstation) and the SVC cluster.

The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.

SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administrative and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.

To use the CLI, an SSH client must be installed on that system, the SSH key pair must be
generated on the client system, and the client’s SSH public key must be stored on the SVC
clusters.

You must preinstall the freeware implementation of SSH2 for Microsoft Windows (which is
called PuTTY) on the System Storage Productivity Center or any other workstation. This
software provides the SSH client function for users who are logged in to the SVC Console
and who want to start the CLI to manage the SVC cluster.

4.4.1 Generating public and private SSH key pairs by using PuTTY
Complete the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. In the PuTTY Key Generator GUI window (Figure 4-40 on page 155), complete the
following steps to generate the keys:
a. Select SSH-2 RSA.

b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.

Figure 4-40 PuTTY Key Generator GUI

3. Move the cursor onto the blank area to generate the keys.

To generate keys: The blank area is the large blank rectangle inside the section of the GUI that is labeled Key (Figure 4-40 on page 155). Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random data to create a unique key pair.

4. After the keys are generated, save them for later use by completing the following steps:
a. Click Save public key, as shown in Figure 4-41 on page 156.



Figure 4-41 Saving the public key

b. You are prompted for a name, for example, pubkey, and a location for the public key, for
example, C:\Support Utils\PuTTY. Click Save.
If another name and location are chosen, ensure that you maintain a record of the
name and location. You must specify the name and location of this SSH public key in
the steps that are described in 4.4.2, “Uploading the SSH public key to the SAN
Volume Controller cluster” on page 157.

Tip: The PuTTY Key Generator saves the public key with no extension, by default.
Use the string pub in naming the public key, for example, pubkey, to differentiate the
SSH public key from the SSH private key easily.

c. In the PuTTY Key Generator window, click Save private key.


d. You are prompted with a warning message, as shown in Figure 4-42. Click Yes to save
the private key without a passphrase.

Figure 4-42 Saving the private key without a passphrase

e. When prompted, enter a name, for example, icat, and a location for the private key, for
example, C:\Support Utils\PuTTY. Click Save.
We suggest that you use the default name icat.ppk because this key was used for icat
application authentication and must have this default name in SVC clusters that are
running on versions before SVC 5.1.

Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.

5. Close the PuTTY Key Generator GUI.


6. Browse to the directory, for example, C:\Support Utils\PuTTY, where the private key was
saved.

4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster
After you create your SSH key pair, you must upload your SSH public key to the SVC cluster. Complete the following steps:
1. From your browser, enter https://svcclusteripaddress/.
Alternatively, from the GUI interface, you can go to the Access Management interface and
select Users.
2. In the next window, as shown in Figure 4-43, select Create User to create a user.

Figure 4-43 Create a user



3. From the window to create a user, as shown in Figure 4-44, enter the Name (user ID) that
you want to create and then enter the password twice. Select the access level that you
want to assign to your user. The Security Administrator (SecurityAdmin) is the maximum
access level. Select the location from which you want to upload the SSH Public Key file
that you created for this user. Click Create.

Figure 4-44 Create the user name and password

You completed the user creation process and uploaded the user’s SSH public key, which is paired later with the user’s private .ppk key, as described in 4.4.3, “Configuring the PuTTY session for the CLI” on page 158. Figure 4-47 on page 161 shows the matching private key being specified for authentication.

The requirements for the SVC cluster setup by using the SVC cluster web interface are
complete.

4.4.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, you must configure the PuTTY session by using the SSH keys
that were generated in 4.4.1, “Generating public and private SSH key pairs by using PuTTY”
on page 154, or by user name if you configured the user without an SSH key.

Complete the following steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center on a Microsoft Windows desktop, select
Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. From the Category pane on the left in the PuTTY Configuration window (Figure 4-45), click
Session if it is not selected.

Tip: The items that you select in the Category pane affect the content that appears in
the right pane.

Figure 4-45 PuTTY Configuration window

3. Under the “Specify the destination you want to connect to” section in the right pane, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if any connection errors occur, they are displayed in the user’s window.



4. From the Category pane on the left, select Connection → SSH to display the PuTTY SSH
connection configuration window, as shown in Figure 4-46.

Figure 4-46 PuTTY SSH connection configuration window

5. In the right pane, for the Preferred SSH protocol version, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.

7. As shown in Figure 4-47, in the “Private key file for authentication:” field under the
Authentication parameters section in the right pane, browse to or enter the fully qualified
directory path and file name of the SSH client private key file (for example, C:\Support
Utils\Putty\icat.PPK) that was created earlier.
You can skip the Connection → SSH → Auth part of the process if you created the user
only with password authentication and no SSH key.

Figure 4-47 PuTTY Configuration: Private key file location for authentication

8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, complete the following steps, as shown in Figure 4-48 on page 162:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and then click Save.
b. For the Host name (or IP address) field, enter the IP address of the SVC cluster.
c. In the Saved Sessions field, enter a name (for example, SVC) to associate with this
session.
d. Click Save again.



Figure 4-48 PuTTY Configuration: Saving a session

You can now close the PuTTY Configuration window or leave it open to continue.

Tips: Consider the following points:


򐂰 When you enter the Host name or IP address in PuTTY, enter your SVC user name
followed by an At sign (@) followed by your host name or IP address. This way, you do
not need to enter your user name each time that you want to access your SVC cluster.
If you did not create an SSH key, you are prompted for the password that you set for the
user.
򐂰 Normally, output that comes from the SVC is wider than the default PuTTY window size.
Change your PuTTY window appearance to use a font with a character size of 8.
To change, click the Appearance item in the Category tree, as shown in Figure 4-48,
and then, click Font. Choose a font with a character size of 8.

4.4.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session by completing the following steps:
1. From the SVC Console desktop, open the PuTTY application by selecting Start →
Programs → PuTTY.
2. In the PuTTY Configuration window (Figure 4-49 on page 163), select the session that
was saved earlier (in our example, ITSO-SVC1) and click Load.
3. Click Open.

Figure 4-49 Open PuTTY command-line session

4. If this is the first time that you use the PuTTY application to connect to the cluster, a PuTTY Security Alert window opens that warns that the server’s host key is not yet cached (the host is not yet known to PuTTY), as shown in Figure 4-50. Click Yes to trust the host key and continue. The CLI starts.

Figure 4-50 PuTTY Security Alert

5. As shown in Example 4-1, the private key that is used in this PuTTY session is now
authenticated against the public key that was uploaded to the SVC cluster.

Example 4-1 Authenticating


Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO_SVC1:admin>

You completed the required tasks to configure the CLI for SVC administration from the SVC
Console. You can close the PuTTY session.



4.4.5 Configuring SSH for IBM AIX clients
To configure SSH for AIX clients, complete the following steps:

Note: You must reach the SVC cluster IP address successfully by using the ping command
from the AIX workstation from which cluster access is wanted.

1. OpenSSL must be installed for OpenSSH to work. Complete the following steps to install
OpenSSH on the AIX client:
a. You can obtain the installation images from the following websites:
• https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
• http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully because OpenSSL must be installed before SSH is
used.
2. Complete the following steps to generate an SSH key pair:
a. Run the cd command to browse to the /.ssh directory.
b. Run the ssh-keygen -t rsa command. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)
c. Pressing Enter uses the default file that is shown in parentheses. Otherwise, enter a
file name (for example, aixkey), and then press Enter. The following prompt is
displayed:
Enter a passphrase (empty for no passphrase)
d. When you use the CLI interactively, enter a passphrase because no other
authentication exists when you are connecting through the CLI. After you enter the
passphrase, press Enter. The following prompt is displayed:
Enter same passphrase again:
Enter the passphrase again. Press Enter.
e. A message is displayed indicating that the key pair was created. The private key file
has the name that was entered previously, for example, aixkey. The public key file has
the name that was entered previously with an extension of .pub, for example,
aixkey.pub.

The use of a passphrase: If you are generating an SSH key pair so that you can use the CLI interactively, use a passphrase so that you must authenticate whenever you connect to the cluster. You can use a passphrase-protected key for scripted usage, but you must use the expect command or a similar tool to have the passphrase passed to the ssh command.
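After the public key (aixkey.pub in this example) is uploaded to a user on the cluster, as described in 4.4.2, “Uploading the SSH public key to the SAN Volume Controller cluster” on page 157, a hedged example of using the key pair from the AIX client follows; the user name and cluster address are examples:

# Run a single command on the cluster by using the private key.
ssh -i /.ssh/aixkey admin@svcclusteripaddress svcinfo lssystem

# Or open an interactive CLI session; you are prompted for the passphrase.
ssh -i /.ssh/aixkey admin@svcclusteripaddress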

4.5 Using IPv6


You can use IPv4 or IPv6 in a dual-stack configuration. Migrating remotely to (or from) IPv6 is
possible, and the migration is nondisruptive.

Using IPv6: To remotely access the SVC clusters that are running IPv6, you are required
to run a supported web browser and have IPv6 configured on your local workstation.

4.5.1 Migrating a cluster from IPv4 to IPv6
As a prerequisite, enable and configure IPv6 on your local workstation. In our case, we
configured an interface with IPv4 and IPv6 addresses on the System Storage Productivity
Center, as shown in Example 4-2.

Example 4-2 Output of ipconfig on the System Storage Productivity Center


C:\Documents and Settings\Administrator>ipconfig
Windows IP Configuration
Ethernet adapter IPv6:
Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 10.0.1.115
Subnet Mask . . . . . . . . . . . : 255.255.255.0
IP Address. . . . . . . . . . . . : 2001:610::115
IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
Default Gateway . . . . . . . . . :

To update a cluster, complete the following steps:


1. Select Settings → Network, as shown in Figure 4-51.

Figure 4-51 Network window



2. Select Management IP Addresses. Click port 1 of one of the nodes, as shown in
Figure 4-52.

Figure 4-52 Management IP Addresses

3. In the window that is shown in Figure 4-53, complete the following steps:
a. Select Show IPv6.
b. Enter an IPv6 address in the IP Address field.
c. Enter an IPv6 gateway in the Gateway field.
d. Enter an IPv6 prefix in the Subnet Mask/Prefix field. The Prefix field can have a value
of 0 - 127.
e. Click OK.

Figure 4-53 Modifying the IP addresses: Adding IPv6 addresses

4. A confirmation window opens, as shown in Figure 4-54 on page 167. Click Apply
Changes.

Figure 4-54 Confirming the changes

5. The Change Management task is started on the server, as shown in Figure 4-55. Click
Close when the task completes.

Figure 4-55 Change Management IP window

6. Test the IPv6 connectivity by using the ping command from a cmd.exe session on your
local workstation, as shown in Example 4-3 on page 168.



Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping
2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms


Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms

Ping statistics for 2001:610::119:


Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round-trip times in milliseconds:
Minimum = 0ms, Maximum = 3ms, Average = 0ms

7. Test the IPv6 connectivity to the cluster by using a compatible IPv6 and SVC web browser
on your local workstation.
8. Remove the IPv4 address in the SVC GUI that is accessing the same windows, as shown
in Figure 4-53 on page 166. Validate this change by clicking OK.

4.5.2 Migrating a cluster from IPv6 to IPv4


The process of migrating a cluster from IPv6 to IPv4 is identical to the process that we
described in 4.5.1, “Migrating a cluster from IPv4 to IPv6” on page 165, except that you add
IPv4 addresses and remove the IPv6 addresses.
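Both migration directions can also be performed from the CLI with the chsystemip command. The following hedged sketch adds an IPv6 management address to Ethernet port 1 and, after connectivity is verified, removes the IPv4 address; all addresses are examples:

# Add an IPv6 management address to Ethernet port 1.
chsystemip -clusterip_6 2001:610::119 -gw_6 2001:610::1 -prefix_6 64 -port 1

# After you verify IPv6 connectivity, remove the IPv4 address from port 1.
chsystemip -noip -port 1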

Chapter 5. Host configuration

In this chapter, we describe the host configuration procedures that are required to attach
supported hosts to the IBM SAN Volume Controller (SVC).

This chapter includes the following topics:


򐂰 Host attachment overview
򐂰 IBM SAN Volume Controller setup
򐂰 iSCSI
򐂰 AIX-specific information
򐂰 Microsoft Windows information
򐂰 Using SAN Volume Controller CLI from a Windows host
򐂰 Microsoft Volume Shadow Copy
򐂰 Specific Linux (on x86/x86_64) information
򐂰 VMware configuration information
򐂰 Sun Solaris hosts
򐂰 Hewlett-Packard UNIX configuration information
򐂰 Using the SDDDSM, SDDPCM, and SDD web interface
򐂰 More information



5.1 Host attachment overview
The SVC supports a wide range of host types (both IBM and non-IBM), which makes it
possible to consolidate storage in an open systems environment into a common pool of
storage. Then, you can use and manage the storage pool more efficiently as a single entity
from a central point on the SAN.

The ability to consolidate storage for attached open systems hosts provides the following
benefits:
򐂰 Unified, easier storage management.
򐂰 Increased utilization rate of the installed storage capacity.
򐂰 Advanced Copy Services functions offered across storage systems from separate
vendors.
򐂰 Consider only one kind of multipath driver for attached hosts.

5.2 IBM SAN Volume Controller setup


In most SVC environments, where high performance and high availability requirements exist,
hosts are attached through a storage area network (SAN) by using the Fibre Channel
Protocol (FCP). Even though other supported SAN configurations are available, for example,
single fabric design, a SAN that consists of two independent fabrics is a preferred practice
and a commonly used setup. This design provides redundant paths and prevents unwanted
interference between fabrics if an incident affects one of the fabrics.

Starting with SVC 5.1, IP-based Small Computer System Interface (iSCSI) connectivity was
introduced to provide an alternative method to attach hosts through an Ethernet local area
network (LAN). However, any inter-node communication within the SVC clustered system,
between the SVC and its back-end storage subsystems, and between the SVC clustered
systems solely takes place through FC. For more information about SVC iSCSI connectivity,
see 5.3, “iSCSI” on page 177.

Starting with SVC 6.4, Fibre Channel over Ethernet (FCoE) is supported on models
2145-CG8 and newer. Only 10 GbE lossless Ethernet or faster is supported.

Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached
hosts. Figure 5-1 on page 171 shows the types of attachments that are supported by SVC
release 7.4.

Figure 5-1 SVC host attachment overview

5.2.1 Fibre Channel and SAN setup overview


Host attachment to the SVC through FC must be made through a SAN fabric because direct
host attachment to SVC nodes is not supported. For SVC configurations, using two redundant
SAN fabrics is a preferred practice. Therefore, we advise that you have each host equipped
with a minimum of two host bus adapters (HBAs) or at least a dual-port HBA with each HBA
connected to a SAN switch in either fabric.

The SVC imposes no particular limit on the actual distance between the SVC nodes and host
servers. Therefore, a server can be attached to an edge switch in a core-edge configuration
and the SVC cluster is at the core of the fabric.

For host attachment, the SVC supports up to three inter-switch link (ISL) hops in the fabric,
which means that the server to the SVC can be separated by up to five FC links, four of which
can be 10 km long (6.2 miles) if longwave small form-factor pluggables (SFPs) are used.

The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 7.4 supports 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabrics, depending on the hardware platform and on the switch to which the SVC is connected. In an environment with switches of multiple speeds in the same fabric, the preferred practice is to connect the SVC and the disk storage system to the switch that is operating at the highest speed.

The SVC nodes contain shortwave SFPs; therefore, they must be within 300 m (984.25 feet)
of the switch to which they attach. Therefore, the configuration that is shown in Figure 5-2 on
page 172 is supported.



Figure 5-2 Example of host connectivity

Table 5-1 shows the fabric type that can be used for communicating between hosts, nodes,
and RAID storage systems. These fabric types can be used at the same time.

Table 5-1 SVC communication options


Communication type                    Host to SVC   SVC to storage   SVC to SVC

Fibre Channel (FC) SAN                Yes           Yes              Yes

iSCSI (1 Gbps or 10 Gbps Ethernet)    Yes           No               No

Fibre Channel over Ethernet (FCoE)    Yes           Yes              Yes
(10 Gbps Ethernet)

In Figure 5-2, the optical distance between SVC Node 1 and Host 2 is slightly over 40 km
(24.85 miles).

To avoid latencies that lead to degraded performance, we suggest that you avoid ISL hops
whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch
as the SVC nodes.

Remember the following limits when you are connecting host servers to an SVC:
򐂰 Up to 256 hosts per I/O Group are supported, which results in a total of 1,024 hosts per
cluster.
If the same host is connected to multiple I/O Groups of a cluster, it counts as a host in
each of these groups.
򐂰 A total of 512 distinct, configured host worldwide port names (WWPNs) are supported per
I/O Group.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is
generated for each iSCSI name) that are associated with all of the hosts that are
associated with a single I/O Group.

Access from a server to an SVC cluster through the SAN fabric is defined by using switch
zoning.

Consider the following rules for zoning hosts with the SVC:
򐂰 Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar
HBAs in the same host. For example, AIX and Microsoft Windows NT hosts must be in
separate zones, and QLogic and Emulex adapters must also be in separate zones.

Important: A configuration that breaches this rule is unsupported because it can introduce instability to the environment.

򐂰 HBA to SVC port zones


Place each host’s HBA in a separate zone with one or two SVC ports. If two ports exist,
use one from each node in the I/O Group. Do not place more than two SVC ports in a zone
with an HBA because this design results in more than the recommended number of paths,
as seen from the host multipath driver.

Number of paths: For n + 1 redundancy, use the following number of paths:


򐂰 With two HBA ports, zone HBA ports to SVC ports 1:2 for a total of four paths.
򐂰 With four HBA ports, zone HBA ports to SVC ports 1:1 for a total of four paths.

Optional (n+2 redundancy): With four HBA ports, zone HBA ports to SVC ports 1:2 for a
total of eight paths.

Here, we use the term HBA port to describe the SCSI initiator and SVC port to describe
the SCSI target.

򐂰 Maximum host paths per logical unit (LU)


For any volume, the number of paths through the SAN from the SVC nodes to a host must
not exceed eight. For most configurations, four paths to an I/O Group (four paths to each
volume that is provided by this I/O Group) are sufficient.

Important: The maximum number of host paths per LUN must not exceed eight.

򐂰 Balanced host load across HBA ports


To obtain the best performance from a host with multiple ports, ensure that each host port
is zoned with a separate group of SVC ports.
򐂰 Balanced host load across SVC ports
To obtain the best overall performance of the subsystem and to prevent overloading, the
workload to each SVC port must be equal. You can achieve this balance by zoning
approximately the same number of host ports to each SVC port.



Figure 5-3 shows an overview of a configuration where servers contain two single-port HBAs
each and the configuration includes the following characteristics:
򐂰 Distribute the attached hosts equally between two logical sets per I/O Group, if possible.
Connect hosts from each set to the same group of SVC ports. This “port group” includes
exactly one port from each SVC node in the I/O Group. The zoning defines the correct
connections.
򐂰 The port groups are defined in the following manner:
– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both
nodes, for example, N1/N2 of I/O Group zero.
– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both
nodes of an I/O Group.
򐂰 You can create aliases for these port groups (per I/O Group):
– Fabric A: IOGRP0_PG1 → N1_P1;N2_P1,IOGRP0_PG2 → N1_P3;N2_P3
– Fabric B: IOGRP0_PG1 → N1_P4;N2_P4,IOGRP0_PG2 → N1_P2;N2_P2
򐂰 Create host zones by always using the host port WWPN and the PG1 alias for hosts in the
first host set. Always use the host port WWPN and the PG2 alias for hosts from the
second host set. If a host must be zoned to multiple I/O Groups, add the PG1 or PG2
aliases from the specific I/O Groups to the host zone.

The use of this schema provides four paths to one I/O Group for each host and helps to
maintain an equal distribution of host connections on SVC ports.

Figure 5-3 Overview of four-path host zoning

When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SVC environment, no more than four paths per I/O Group are
required to accomplish this layout.
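As an illustration only, on a Brocade-based fabric the port group alias and a single-HBA host zone from this schema might be created with commands like the following; the WWPNs and names are examples, and other switch vendors provide equivalent commands:

# Fabric A: create the port-group alias for I/O Group 0
# (one SVC port from each node; WWPNs are examples).
alicreate "IOGRP0_PG1", "50:05:07:68:01:10:a1:b2; 50:05:07:68:01:10:c3:d4"

# Create a host zone that contains one host HBA port and the alias.
zonecreate "W2K12_HOST01_A", "21:00:00:0e:1e:09:b0:68; IOGRP0_PG1"

# Add the zone to the fabric configuration and activate it.
cfgadd "FABRIC_A_CFG", "W2K12_HOST01_A"
cfgenable "FABRIC_A_CFG"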

All paths must be managed by the multipath driver on the host side. If we assume that a
server is connected through four ports to the SVC, each volume is seen through eight paths.
With 125 volumes mapped to this server, the multipath driver must support handling up to
1,000 active paths (8 x 125).

For more configuration and operational information about the IBM Subsystem Device Driver
(SDD), see the Multipath Subsystem Device Driver User’s Guide, S7000303, which is
available at this website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-4. You can combine this schema with the previous four-path
zoning schema.

Figure 5-4 Overview of eight-path host zoning

Port designation recommendations


The port to local node communication is used for mirroring write cache and for metadata
exchange between nodes. The port to local node communication is critical to the stable
operation of the cluster. The DH8 nodes with their 8-port and 12-port configurations provide
an opportunity to isolate the port to local node traffic from other cluster traffic on dedicated
ports, therefore providing a level of protection against misbehaving devices and workloads
that can compromise the performance of the shared ports.

Additionally, isolating remote replication traffic on dedicated ports is beneficial and ensures
that problems that affect the cluster-to-cluster interconnection do not adversely affect the
ports on the primary cluster and therefore affect the performance of workloads running on the
primary cluster.

We recommend the following port designations for isolating both port to local and port to
remote node traffic, as shown in Table 5-2 on page 176.



Important: Be careful when you perform zoning so that inter-node ports are not used for
host/storage traffic in the 8-port and 12-port configurations.

Table 5-2   Port designation recommendations for isolating traffic

Card/port        SAN      Four-port nodes           Eight-port nodes             Twelve-port nodes            Twelve-port nodes, write data rate
                 fabric                                                                                        greater than 3 GBps for each I/O Group
C1P1             A        Host/storage/inter-node   Host/storage                 Host/storage                 Inter-node
C1P2             B        Host/storage/inter-node   Host/storage                 Host/storage                 Inter-node
C1P3             A        Host/storage/inter-node   Host/storage                 Host/storage                 Host/storage
C1P4             B        Host/storage/inter-node   Host/storage                 Host/storage                 Host/storage
C2P1             A        -                         Inter-node                   Inter-node                   Inter-node
C2P2             B        -                         Inter-node                   Inter-node                   Inter-node
C2P3             A        -                         Replication or host/storage  Host/storage                 Host/storage
C2P4             B        -                         Replication or host/storage  Host/storage                 Host/storage
C5P1             A        -                         -                            Host/storage                 Host/storage
C5P2             B        -                         -                            Host/storage                 Host/storage
C5P3             A        -                         -                            Replication or host/storage  Replication or host/storage
C5P4             B        -                         -                            Replication or host/storage  Replication or host/storage
localfcportmask           1111                      110000                       110000                       110011

This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, or even later migrations from 8-port or
12-port configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered. However, these approaches do not appreciably increase availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.

Although alternate port mappings that spread traffic across HBAs can allow adapters to come
back online following a failure, they will not prevent a node from going offline temporarily to
reboot and attempt to isolate the failed adapter and then rejoin the cluster. Our
recommendation takes all of these considerations into account: the greater complexity might
lead to migration challenges in the future, so the simpler approach is best.

5.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP
packets and therefore uses an existing IP network instead of requiring FC HBAs and a
SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI
connectivity is a software feature that is provided by the SVC code.

The iSCSI-attached hosts can use a single network connection or multiple network
connections.

Restriction: Only hosts can attach to the SVC through iSCSI. The SVC back-end storage
must be attached through the FC SAN.

Each SVC node is equipped with two onboard Ethernet network interface cards (NICs), which
can operate at a link speed of 10 Mbps, 100 Mbps, or 1000 Mbps. Both cards can be used to
carry iSCSI traffic. Each node’s NIC number 1 is used as the primary SVC cluster
management port. For optimal performance, we advise that you use a 1 Gbps Ethernet
connection between the SVC and the iSCSI-attached hosts when the SVC node’s
onboard NICs are used.

Starting with the SVC 2145-CG8, an optional 10 Gbps 2-port Ethernet adapter (Feature Code
5700) is available. The required 10 Gbps shortwave SFPs are available as Feature Code
5711. If the 10 GbE option is installed, you cannot install any internal solid-state drives
(SSDs). The 10 GbE option is used solely for iSCSI traffic.

5.3.1 Initiators and targets


An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP
network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI
node.

You can use the following types of iSCSI initiators in host systems:
򐂰 Software initiator: Available for most operating systems, for example, AIX, Linux, and
Windows
򐂰 Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.

For more information about the supported operating systems for iSCSI host attachment and
the supported iSCSI HBAs, see the following websites:
򐂰 IBM SAN Volume Controller v7.4 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
򐂰 IBM SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp

An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of
potentially many instances of iSCSI nodes that are running on that server.



5.3.2 iSCSI nodes
One or more iSCSI nodes exist within a network entity. The iSCSI node is accessible through
one or more network portals. A network portal is a component of a network entity that has a
TCP/IP network address and can be used by an iSCSI node.

An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified
name (IQN). This name identifies the node only; it is not the node’s address. In iSCSI, the
name is separated from the addresses. This separation allows multiple iSCSI nodes to use
the same addresses or, as implemented in the SVC, the same iSCSI node to use multiple
addresses.

5.3.3 iSCSI qualified name


An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its
own IQN, which, by default, is in the following form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

An iSCSI host in the SVC is defined by specifying its iSCSI initiator names. The following
example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01

During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs.

An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.

Figure 5-5 shows an overview of the iSCSI implementation in the SVC.

Figure 5-5 SVC iSCSI overview

A host that is accessing SVC volumes through iSCSI connectivity uses one or more Ethernet
adapters or iSCSI HBAs to connect to the Ethernet network.

Both onboard Ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for
host attachment, we advise that you dedicate Ethernet port one for the SVC management
and port two for iSCSI use. This way, port two can be connected to a separate network
segment or virtual LAN (VLAN) for iSCSI because the SVC does not support the use of VLAN
tagging to separate management and iSCSI traffic.

Note: Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports.

For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, you
can define two IPv4 and two IPv6 addresses or iSCSI network portals.

5.3.4 iSCSI setup of the SAN Volume Controller and host server
You must perform the following procedure when you are setting up a host server for use as an
iSCSI initiator with the SVC volumes. The specific steps vary depending on the particular host
type and operating system that you use.

To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI
initiator. For example, the software-based iSCSI initiator can be a Linux or Windows iSCSI
software initiator. The hardware-based iSCSI initiator can be an iSCSI HBA inside the host
server.

To set up your host server for use as an iSCSI software-based initiator with the SVC volumes,
complete the following steps. (The CLI is used in this example.)
1. Complete the following steps to set up your SVC cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in
the I/O Groups that use the iSCSI volumes.
b. Configure the node Ethernet ports on each SVC node in the clustered system by
running the cfgportip command.
c. Verify that you configured the node and the clustered system’s Ethernet ports correctly
by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on the SVC clustered system.
e. Use the mkhost command to create a host object on the SVC. It defines the host’s
iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in the SVC.
2. Complete the following steps to set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI
software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server
iSCSI initiator logs in to the SVC clustered system and discovers the SVC volumes.
The host then creates host devices for the volumes.

After the host devices are created, you can use them with your host applications.
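
The following sequence is a minimal sketch of the SVC-side steps that were described. The
IP addresses and object names are placeholders, and the initiator IQN is the Windows
example that was shown earlier; verify the exact syntax against the CLI reference for your
code level.

IBM_2145:ITSO_SVC1:admin>svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2
IBM_2145:ITSO_SVC1:admin>svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 2
IBM_2145:ITSO_SVC1:admin>svcinfo lsportip
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -mdiskgrp MDG_0_DS45 -iogrp 0 -size 10 -unit gb -name iscsi_vol01
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name iscsihost01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host iscsihost01 iscsi_vol01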



5.3.5 Volume discovery
Hosts can discover volumes through one of the following three mechanisms:
򐂰 Internet Storage Name Service (iSNS)
The SVC can register with an iSNS name server; the IP address of this server is set by
using the chsystem command. A host can then query the iSNS server for available iSCSI
targets.
򐂰 Service Location Protocol (SLP)
The SVC node runs an SLP daemon, which responds to host requests. This daemon
reports the available services on the node. One service is the CIM object manager
(CIMOM), which runs on the configuration node; iSCSI I/O service now also can be
reported.
򐂰 SCSI Send Target request
The host can also send a Send Target request by using the iSCSI protocol to the iSCSI
TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI
targets before a discovery can be started.

5.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to
enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves
sharing a CHAP secret between the cluster and the host. If the host does not provide the
correct key, the SVC does not allow it to perform I/O to volumes. Also, you can
assign a CHAP secret to the cluster.
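
As an illustration, the following commands show how a CHAP secret might be assigned to a
host object and to the clustered system. The secret values and the host name are
placeholders, and the parameter names are assumptions that should be verified against the
CLI reference for your code level.

IBM_2145:ITSO_SVC1:admin>svctask chhost -chapsecret mysecret01 iscsihost01
IBM_2145:ITSO_SVC1:admin>svctask chsystem -chapsecret clustersecret01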

5.3.7 Target failover


A new feature with iSCSI is the option to move iSCSI target IP addresses between the SVC
nodes in an I/O Group. IP addresses are moved only from one node to its partner node if a
node goes through a planned or unplanned restart. If the Ethernet link to the SVC clustered
system fails due to a cause outside of the SVC (such as the cable being disconnected or the
Ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP
access to the cluster. To enable the validation of the Ethernet access to the nodes, it
responds to ping with the standard one-per-second rate without frame loss.

A concept that is used for handling the iSCSI IP address failover is called a clustered Ethernet
port. A clustered Ethernet port consists of one physical Ethernet port on each node in the
cluster. The clustered Ethernet port contains configuration settings that are shared by all of
these ports.

Figure 5-6 on page 181 shows an example of an iSCSI target node failover. This example
provides a simplified overview of what happens during a planned or unplanned node restart in
an SVC I/O Group. The example refers to the SVC nodes with no optional 10 GbE iSCSI
adapter installed.

The following numbered comments relate to the numbers in Figure 5-6:
1. During normal operation, one iSCSI target node instance is running on each SVC
node. All of the IP addresses (IPv4/IPv6) that belong to this iSCSI target (including the
management addresses if the node acts as the configuration node) are presented on the
two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network
portal (IPv4/IPv6) IP addresses that are defined on Port1/Port2 and the management
(IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of
the partner node within the I/O Group, node N2. An iSCSI initiator that is running on a
server runs a reconnect to its iSCSI target, that is, the same IP addresses that are
presented now by a new node of the SVC cluster.
3. When the node (N1) finishes its restart, the iSCSI target node (including its IP addresses)
that is running on N2 fails back to N1. Again, the iSCSI initiator that is running on a server
runs a reconnect to its iSCSI target. The management addresses do not fail back. N2
remains in the role of the configuration node for this cluster.

Figure 5-6 iSCSI node failover scenario

5.3.8 Host failover


From a host perspective, a multipathing driver (MPIO) is not required to handle an SVC node
failover. In an SVC node restart, the host reconnects to the IP addresses of the iSCSI target
node that reappear after several seconds on the ports of the partner node.

A host multipathing driver for iSCSI is required in the following situations:


򐂰 To protect a host from network link failures, including port failures on the SVC nodes.
򐂰 To protect a host from an HBA failure (if two HBAs are in use).
򐂰 To protect a host from network failures, if the host is connected through two HBAs to two
separate networks.
򐂰 To provide load balancing on the server’s HBA and the network links.



The commands for the configuration of the iSCSI IP addresses were separated from the
configuration of the cluster IP addresses.

The following commands are new commands that are used for managing iSCSI IP addresses:
򐂰 The lsportip command lists the iSCSI IP addresses that are assigned for each port on
each node in the cluster.
򐂰 The cfgportip command assigns an IP address to each node’s Ethernet port for iSCSI
I/O.

The following commands are new commands that are used for managing the cluster IP
addresses:
򐂰 The lssystemip command returns a list of the cluster management IP addresses that are
configured for each port.
򐂰 The chsystemip command modifies the IP configuration parameters for the cluster.

The parameters for remote services (SSH and web services) remain associated with the
cluster object. During an SVC code upgrade, the configuration settings for the clustered
system are applied to the node Ethernet port 1.

For iSCSI-based access, the use of redundant network connections and separating iSCSI
traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target
port failure from compromising the host server’s access to the volumes.

Because both onboard Ethernet ports of an SVC node can be configured for iSCSI, we
advise that you dedicate Ethernet port 1 for SVC management and port 2 for iSCSI usage. By
using this approach, port 2 can be connected to a dedicated network segment or VLAN for
iSCSI. Because the SVC does not support the use of VLAN tagging to separate management
and iSCSI traffic, you can assign the correct LAN switch port to a dedicated VLAN to separate
SVC management and iSCSI traffic.

5.4 AIX-specific information


The following section describes specific information that relates to the connection of
AIX-based hosts in an SVC environment.

AIX-specific information: In this section, the IBM System p® information applies to all
AIX hosts that are listed on the SVC interoperability support website, including
IBM System i partitions and IBM JS blades.

5.4.1 Configuring the AIX host


The following steps are required to attach the SVC volumes to an AIX host:
1. Install the HBAs in the AIX host system.
2. Ensure that you installed the correct operating systems and version levels on your host,
including any updates and authorized program analysis reports (APARs) for the operating
system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.

5. Install the 2145 host attachment support package. For more information, see 5.4.5,
“Installing the 2145 host attachment support package” on page 185.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on the SVC to define the host, volumes, and host
mapping.
8. Run the cfgmgr command to discover and configure the SVC volumes.

The following sections describe the current support information.

Important: It is vital that you regularly check the listed websites for any updates.

5.4.2 Operating system versions and maintenance levels


At the time of this writing, the SVC supports AIX V5.3 through V7.1, specifically the following
levels:
򐂰 AIX V5.3 Technology Level (TL) 12 (based on the extended service contract)
򐂰 AIX V6.1
򐂰 AIX V7.1

For more information and device driver support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

5.4.3 HBAs for IBM System p hosts


Ensure that your IBM System p AIX hosts contain the supported HBAs. For more information
about current interoperability, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

Important: The maximum number of FC ports that are supported in a single host (or
logical partition) is four. These ports can be four single-port adapters or two dual-port
adapters or a combination if the maximum number of ports that attach to the SVC does not
exceed four.

5.4.4 Configuring fast fail and dynamic tracking


For hosts that are running AIX V5.3 or later operating systems, enable both fast fail and
dynamic tracking.

Complete the following steps to configure your host system to use the fast fail and dynamic
tracking attributes:
1. Run the following command to set the FC SCSI I/O Controller Protocol Device to each
adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
This command was for adapter fscsi0. Example 5-1 on page 184 shows the command for
both adapters on a system that is running IBM AIX V6.1.



Example 5-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Run the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
This example command was for adapter fscsi0. Example 5-2 shows the command for
both adapters in IBM AIX V6.1.

Example 5-2 Enable dynamic tracking


#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed

Note: The fast fail and dynamic tracking attributes do not persist through an adapter
delete and reconfigure operation. Therefore, if the adapters are deleted and then
configured back into the system, these attributes are lost and must be reapplied.
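
To confirm the current settings, for example, after an adapter is reconfigured, you can display
the attributes of each FC SCSI protocol device, as shown in this sketch for adapter fscsi0:

# lsattr -El fscsi0 | egrep "fc_err_recov|dyntrk"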

Host adapter configuration settings


You can display the availability of installed host adapters by using the command that is shown
in Example 5-3.

Example 5-3 FC host adapter availability


#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter

You can display the WWPN, with other attributes, including the firmware level, by using the
command that is shown in Example 5-4. The WWPN is represented as the Network Address.

Example 5-4 FC host adapter settings and WWPN

#lscfg -vpl fcs0


fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951

Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

5.4.5 Installing the 2145 host attachment support package


To configure SVC volumes to an AIX host with the correct device type of 2145, you must
install the 2145 host attachment support file set before you run the cfgmgr command.
Running the cfgmgr command before you install the host attachment support file set results in
the LUNs being configured as “Other SCSI Disk Drives” and they are not recognized by the
SDDPCM. To correct the device type, you must delete the hdisks by using the rmdev -dl
hdiskX command and then by rerunning the cfgmgr command.

Complete the following steps to install the host attachment support package:
1. See the following website:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment for SDDPCM on AIX.
3. Download the appropriate host attachment package archive for your AIX version; the file
set that is contained in the package is devices.fcp.disk.ibm.mpio.rte.
4. Follow the instructions that are provided on the website and the readme files to install the
script.
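
As a sketch, assuming that the downloaded archive was extracted into the current directory,
the file set can be installed and verified with standard AIX commands:

# inutoc .
# installp -ac -d . devices.fcp.disk.ibm.mpio.rte
# lslpp -l devices.fcp.disk.ibm.mpio.rte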

5.4.6 Subsystem Device Driver Path Control Module


Subsystem Device Driver Path Control Module (SDDPCM) is a loadable path control module
for supported storage devices to supply path management functions and error recovery
algorithms. When the supported storage devices are configured as Multipath I/O (MPIO)
devices, SDDPCM is loaded as part of the AIX MPIO FCP or AIX MPIO serial-attached SCSI
(SAS) device driver during the configuration.

The AIX MPIO device driver automatically discovers, configures, and makes available all
storage device paths. SDDPCM then manages these paths to provide the following functions:
򐂰 High availability and load balancing of storage I/O
򐂰 Automatic path-failover protection
򐂰 Concurrent download of supported storage devices’ licensed machine code
򐂰 Prevention of a single-point failure

The AIX MPIO device driver with SDDPCM enhances the data availability and I/O load
balancing of SVC volumes.

SDD: For AIX hosts, use the SDDPCM as the multipath software over the existing SDD.
Although SDD is still supported, a description of SDD is beyond the scope of this
publication. Since AIX version 6.1, SDD is no longer available for multipathing.



SDDPCM installation
Download the appropriate version of SDDPCM and install it by using the standard AIX
installation procedure. The latest SDDPCM software versions are available at the following
website:
http://ibm.com/support/entry/portal/Downloads/Hardware/System_Storage/Storage_soft
ware/Other_software_products/System_Storage_Multipath_Subsystem_Device_Driver/

Check the driver readme file to ensure that your AIX system meets all prerequisites.

Example 5-5 shows the appropriate version of SDDPCM that is downloaded into the
/tmp/sddpcm directory. From here, we extract it and run the inutoc command, which
generates a .toc file that is needed by the installp command before SDDPCM is
installed. Finally, we start the installp command, which installs SDDPCM onto this AIX host.

Example 5-5 Installing SDDPCM on AIX


# ls -l
total 3232
-rw-r----- 1 root system 1648640 Jul 15 13:24
devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r-- 1 root system 531 Jul 15 13:25 .toc
-rw-r----- 1 271001 449628 1638400 Oct 31 2007 devices.sddpcm.61.rte
-rw-r----- 1 root system 1648640 Jul 15 13:24
devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM
that is installed.

Example 5-6 Checking the SDDPCM device driver


# lslpp -l | grep sddpcm
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61

For more information about how to enable the SDDPCM web interface, see 5.12, “Using the
SDDDSM, SDDPCM, and SDD web interface” on page 238.

5.4.7 Configuring the assigned volume by using SDDPCM


We use an AIX host with the host name Atlantic to demonstrate attaching SVC volumes to an
AIX host. Example 5-7 on page 187 shows the host configuration before the SVC volumes
are configured. The lspv output shows the existing hdisks and the lsvg output shows the
existing volume group (VG).

Example 5-7 Status of AIX host system Atlantic

# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg

Identifying the WWPNs of the host adapter ports


Example 5-8 shows how to use the lscfg commands to list the WWPNs for all installed
adapters. We use the WWPNs later for mapping the SVC volumes.

Example 5-8 HBA information for host Atlantic


# lscfg -vl fcs* |egrep "fcs|Network"
fcs1 U0.1-P2-I4/Q1 FC Adapter
Network Address.............10000000C932A865
Physical Location: U0.1-P2-I4/Q1
fcs2 U0.1-P2-I5/Q1 FC Adapter
Network Address.............10000000C94C8C1C

Displaying the SAN Volume Controller configuration


You can use the SVC CLI to display the host configuration on the SVC and to validate the
physical access from the host to the SVC. Example 5-9 on page 188 shows the use of the
lshost and lshostvdiskmap commands to obtain the following information:
򐂰 We confirm that a host definition was correctly defined for the host Atlantic.
򐂰 The WWPNs that are listed in Example 5-8 are logged in with two logins each.
򐂰 Atlantic has three volumes that are assigned to each WWPN, and the volume serial
numbers are listed.



Example 5-9 SVC definitions for host system Atlantic
IBM_2145:ITSO_SVC1:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active

IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Atlantic


id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
8 Atlantic 0 14 Atlantic0001
10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 Atlantic0002
10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 Atlantic0003
10000000C94C8C1C 6005076801A180E90800000000000062

Discovering and configuring LUNs


The cfgmgr command discovers the new LUNs and configures them into AIX. The following
command probes the devices on the adapters individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2

The following command probes the devices sequentially across all installed adapters:

# cfgmgr -vS

The lsdev command lists the three newly configured hdisks that are represented as
MPIO FC 2145 devices, as shown in Example 5-10.

Example 5-10 Volumes from the SVC


# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145

Now, you can use the mkvg command to create a VG with the three newly configured hdisks,
as shown in Example 5-11 on page 189.

Example 5-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

The lspv output now shows the new VG label on each of the hdisks that were included in the
VGs, as shown in Example 5-12.

Example 5-12 Showing the vpath assignment into the VG


# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active

5.4.8 Using SDDPCM


You administer SDDPCM by using the pcmpath command. You use this command to
perform all administrative functions, such as displaying and changing the path state. The
pcmpath query adapter command displays the current state of the adapters. In
Example 5-13, both adapters show the optimal status of State=NORMAL and Mode=ACTIVE.

Example 5-13 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter

Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6

The pcmpath query device command displays the current state of the devices. Example 5-14
shows the path State and Mode for each of the defined hdisks. All paths show the
optimal status of State=OPEN and Mode=NORMAL. Additionally, an asterisk (*) that is displayed
next to a path indicates an inactive path that is configured to the non-preferred SVC node.

Example 5-14 SDDPCM commands that are used to check the availability of the devices

# pcmpath query device


Total Devices : 3
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 152 0
1* fscsi1/path1 OPEN NORMAL 48 0



2* fscsi2/path2 OPEN NORMAL 48 0
3 fscsi2/path3 OPEN NORMAL 160 0

DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance


SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0

DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance


SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0

5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg VG is created with hdisk3. A logical volume is created by using the VG. Then,
a JFS2 file system (/itsoaixvg) is created on the VG, as shown in Example 5-15.

Example 5-15 Host system new VG and file system configuration


# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg

5.4.10 Expanding an AIX volume


AIX supports dynamic volume expansion starting at IBM AIX 5L™ Version 5.2. By using this
capability, a volume’s capacity can be increased by the storage subsystem while the volumes
are actively used by the host and applications.

The following restrictions apply:


򐂰 The volume cannot belong to a concurrent-capable VG.
򐂰 The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.

The following steps show how to expand a volume on an AIX host when the volume is on the
SVC:
1. Display the current size of the SVC volume by using the SVC CLI command lsvdisk
<VDisk_name>. The capacity of the volume, as seen by the host, is displayed in the
capacity field of the lsvdisk output in GBs.
2. The corresponding AIX hdisk can be identified by matching the vdisk_UID from the
lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity that is configured in AIX by using the lspv hdisk command. The
capacity is shown in the TOTAL PPs field in MBs.
4. To expand the capacity of the SVC volume, use the expandvdisksize command.
5. After the capacity of the volume is expanded, AIX must update its configured capacity. To
start the capacity update on AIX, use the chvg -g vg_name command, where vg_name is
the VG in which the expanded volume is found.
If AIX does not return any messages, the command was successful and the volume
changes in this VG were saved.
If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new capacity that was configured by AIX by using the lspv hdisk command.
The capacity is shown in the TOTAL PPs field in MBs.
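
The following sequence is a minimal sketch of these steps, using the Atlantic host’s volume
Atlantic0001 (hdisk3 in the itsoaixvg VG) from the earlier examples; the 1 GB increment is
illustrative only.

On the SVC:
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk Atlantic0001
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 1 -unit gb Atlantic0001

On the AIX host:
# pcmpath query device
# lspv hdisk3
# chvg -g itsoaixvg
# lspv hdisk3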

5.4.11 Running SAN Volume Controller commands from an AIX host system
To run CLI commands, you must install and prepare the SSH client system on the AIX host
system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also
need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for IBM Power
Systems™:
http://ibm.com/systems/power/software/aix/linux/toolbox/download.html

The AIX installation images from IBM developerWorks are available at this website:
http://sourceforge.net/projects/openssh-aix

Complete the following steps:


1. To generate the key files on AIX, run the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. To generate an
rsa2 key, specify the value rsa; for an rsa1 key, specify rsa1. When you create the key for the
SVC, use type rsa2. The -f parameter specifies the file names of the private and public
keys on the AIX server (the public key has the extension .pub after the file name).
2. Install the public key on the SVC by using the Master Console. Copy the public key to the
Master Console and install the key to the SVC, as described in Chapter 4, “SAN Volume
Controller initial configuration” on page 121.
3. On the AIX server, ensure that the private key and the public key are in the .ssh directory
and in the home directory of the user.
4. To connect to the SVC and use a CLI session from the AIX host, run the following
command:
ssh -l admin -i filename svc



5. You can also run the commands directly on the AIX host, which is useful when you are
making scripts. To run the commands directly on the AIX host, add the SVC commands to
the previous command. For example, to list the hosts that are defined on the SVC, enter
the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user name that is used to log in to the SVC, -i filename
is the filename of the private key that is generated, and svc is the host name or IP address
of the SVC.

5.5 Microsoft Windows information


In the following sections, we describe specific information about the connection of hosts that
are based on Windows to the SVC environment.

5.5.1 Configuring Windows Server 2008 and 2012 hosts


This section provides an overview of the requirements for attaching an SVC to a host that is
running Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012. You must
install the IBM Subsystem Device Driver Device Specific Module (SDDDSM) multipath driver
to make the Windows server capable of handling volumes that are presented by the SVC.

Important: With Windows 2012, you can use native Microsoft device drivers, but we
strongly advise that you install IBM SDDDSM drivers.

Before you attach the SVC to your host, ensure that all of the following requirements are
fulfilled:
򐂰 Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
򐂰 Check the LUN limitations for your host system. Ensure that enough FC adapters are
installed in the server to handle the total number of LUNs that you want to attach.

5.5.2 Configuring Windows


To configure the Windows hosts, complete the following steps:
1. Ensure that the latest OS service pack and hot fixes are applied to your Windows server
system.
2. Use the latest supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as described in 5.5.4, “Installing and
configuring the host adapter” on page 193.
4. Connect the Windows Server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.5.3, “Hardware lists, device driver,
HBAs, and firmware levels” on page 193.
7. Configure the HBA for hosts that are running Windows, as described in 5.5.4, “Installing
and configuring the host adapter” on page 193.
8. Check the HBA driver readme file for the required Windows registry settings, as described
in 5.5.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 193.

9. Check the disk timeout on Windows Server, as described in 5.5.5, “Changing the disk
timeout on Windows Server” on page 193.
10.Install and configure SDDDSM.
11.Restart the Windows Server host system.
12.Configure the host, volumes, and host mapping in the SVC.
13.Use Rescan disk in Computer Management of the Windows Server to discover the
volumes that were created on the SVC.

5.5.3 Hardware lists, device driver, HBAs, and firmware levels


For more information about the supported hardware, device driver, and firmware, see this
website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

On this page, browse to section V7.4.x, select Supported Hardware, Device Driver,
Firmware and Recommended Software Levels, and then search for Windows.

At this website, you also can find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
Most manufacturers’ driver readme files list the instructions for the Windows registry
parameters that must be set for the HBA driver.

5.5.4 Installing and configuring the host adapter


Install the host adapters in your system. See the manufacturer’s instructions for the
installation and configuration of the HBAs.

Also, check the documentation that is provided for the server system for the installation
guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.

The detailed configuration settings that you must make for the various vendors’ FC HBAs are
available in the SVC Information Center by selecting Installing → Host attachment → Fibre
Channel host attachments → Hosts running the Microsoft Windows Server operating
system.

5.5.5 Changing the disk timeout on Windows Server


This section describes how to change the disk I/O timeout value on Windows Server 2008,
Windows Server 2008 R2, and Windows Server 2012 systems.

On your Windows Server hosts, complete the following steps to change the disk I/O timeout
value to 60 in the Windows registry:
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-7 on page 194.



Figure 5-7 Regedit
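
Alternatively, the same change can be made from an elevated command prompt; the
following one-line sketch sets the value to 60 (decimal seconds):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f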

5.5.6 Installing the SDDDSM multipath driver on Windows


This section describes how to install the SDDDSM driver on a Windows Server 2008 R2 host
and Windows Server 2012.

Windows Server 2012, Windows Server 2008 (R2), and MPIO


Microsoft Multipath I/O (MPIO) is a generic multipath driver that is provided by Microsoft,
which does not form a complete solution. It works with device-specific modules (DSMs),
which usually are provided by the vendor of the storage subsystem. This design allows the
parallel operation of multiple vendors’ storage systems on the same host without interfering
with each other because the MPIO instance interacts only with that storage system for which
the DSM is provided.

MPIO is not installed with the Windows operating system, by default. Instead, storage
vendors must pack the MPIO drivers with their own DSMs. IBM SDDDSM is the IBM multipath
I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is
designed specifically to support IBM storage devices on Windows Server 2008 (R2) and
Windows 2012 servers.

The intention of MPIO is to achieve better integration of multipath storage with the operating
system. It also allows the use of multipathing in the SAN infrastructure during the boot
process for SAN boot hosts.

SDDDSM for IBM SAN Volume Controller


The SDDDSM installation is a package for the SVC device for the Windows Server 2008 (R2)
and Windows Server 2012 operating systems. Together with MPIO, SDDDSM is designed to
support the multipath configuration environments in the SVC. SDDDSM resides in the host
system with the native disk device driver and provides the following functions:
򐂰 Enhanced data availability
򐂰 Dynamic I/O load-balancing across multiple paths
򐂰 Automatic path failover protection
򐂰 Enabled concurrent firmware upgrade for the storage system
򐂰 Path-selection policies for the host system

No SDDDSM support exists for Windows Server 2000 because SDDDSM requires the
STORPORT version of the HBA device drivers. Table 5-3 on page 195 lists the SDDDSM
driver levels that are supported at the time of this writing.

Table 5-3 Currently supported SDDDSM driver levels
Windows operating system SDD level

Windows Server 2012 (x64) 2.4.3.4-4

Windows Server 2008 R2 (x64) 2.4.3.4-4

Windows Server 2008 (32-bit)/Windows Server 2008 (x64) 2.4.3.4-4

For more information about the levels that are available, see this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM

To download SDDDSM, see this website:


http://ibm.com/support/docview.wss?uid=ssg1S4000350#SVC

After you download the appropriate archive (.zip file) from this URL, extract it to your local
hard disk and start setup.exe to install SDDDSM. A command prompt window opens, as
shown in Figure 5-8. Confirm the installation by entering Y.

Figure 5-8 SDDDSM installation

After the setup completes, enter Y again to confirm the reboot request, as shown in
Figure 5-9.

Figure 5-9 Reboot system after installation



After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager because the SDDDSM device appears (as shown in
Figure 5-10) and the SDDDSM tools are installed, as shown in Figure 5-11.

Figure 5-10 SDDDSM installation

The SDDDSM tools are installed, as shown in Figure 5-11.

Figure 5-11 SDDDSM installation
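
You can also verify the installation from the SDDDSM command prompt, for example, by
displaying the installed driver version and the adapter states:

C:\Program Files\IBM\SDDDSM>datapath query version
C:\Program Files\IBM\SDDDSM>datapath query adapter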

5.5.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to
Windows Server 2012
Create the volumes on the SVC and map them to the Windows Server 2008 R2 or 2012 host.

In this example, we mapped three SVC disks to the Windows Server 2008 R2 host that is
named Diomede, as shown in Example 5-16.

Example 5-16 SVC host mapping to host Diomede


IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Diomede
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Diomede 0 20 Diomede_0001 210000E08B0541BC
6005076801A180E9080000000000002B
0 Diomede 1 21 Diomede_0002 210000E08B0541BC
6005076801A180E9080000000000002C
0 Diomede 2 22 Diomede_0003 210000E08B0541BC
6005076801A180E9080000000000002D

Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window
opens.
3. Select Action → Rescan Disks, as shown in Figure 5-12.

Figure 5-12 Windows Server 2008 R2: Rescan disks

4. The SVC disks now appear in the Disk Management window, as shown in Figure 5-13 on
page 198.



Figure 5-13 Windows Server 2008 R2 Disk Management window

After you assign the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices
in the Device Manager, as shown in Figure 5-14.

Figure 5-14 Windows Server 2008 R2 Device Manager

5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and then click Subsystem Device Driver DSM, as shown in Figure 5-15 on
page 199. The SDDDSM command-line utility appears.

Figure 5-15 Windows Server 2008 R2 Subsystem Device Driver DSM utility

6. Run the datapath query device command and press Enter. This command displays all of
the disks and the available paths, including their states, as shown in Example 5-17.

Example 5-17 Windows Server 2008 R2 SDDDSM command-line utility


Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1429 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1456 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 1520 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 1517 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 27 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 1396 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 1459 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

C:\Program Files\IBM\SDDDSM>



SAN zoning: When the SAN zoning guidance is followed, we see this result, which
uses one volume and a host with two HBAs: (number of volumes) x (number of paths
per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.

7. Right-click the disk in Disk Management and then select Online to place the disk online,
as shown in Figure 5-16.

Figure 5-16 Windows Server 2008 R2: Place disk online

8. Repeat step 7 for all of your attached SVC disks.


9. Right-click one disk again and select Initialize Disk, as shown in Figure 5-17.

Figure 5-17 Windows Server 2008 R2: Initialize Disk

10.Mark all of the disks that you want to initialize and then click OK, as shown in Figure 5-18
on page 201.

Figure 5-18 Windows Server 2008 R2: Initialize Disk

11.Right-click the unallocated disk space and then select New Simple Volume, as shown in
Figure 5-19.

Figure 5-19 Windows Server 2008 R2: New Simple Volume

12.The New Simple Volume Wizard opens. Click Next.


13.Enter a disk size and then click Next, as shown in Figure 5-20.

Figure 5-20 Windows Server 2008 R2: New Simple Volume



14.Assign a drive letter and then click Next, as shown in Figure 5-21.

Figure 5-21 Windows Server 2008 R2: New Simple Volume

15.Enter a volume label and then click Next, as shown in Figure 5-22.

Figure 5-22 Windows Server 2008 R2: New Simple Volume

16.Click Finish. Repeat steps 9 - 16 for every SVC disk on your host system (Figure 5-23 on
page 203).

Figure 5-23 Windows Server 2008 R2: Disk Management

5.5.8 Extending a volume


The combination of the SVC and Windows Server provides the capability to extend volumes
while they are in use.

You can expand a volume in the SVC cluster even if it is mapped to a host. Windows Server
(since version 2000) can handle volumes that are expanded even while the host has
applications running.

A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on
the SVC cannot be expanded unless that mapping is removed. Therefore, the
FlashCopy, Metro Mirror, or Global Mirror relationship on that volume must be stopped before
the volume can be expanded.

Important: If you want to expand a logical drive in an extended partition in Windows
Server 2003, apply the Hotfix from KB841650, which is available from the Microsoft
Knowledge Base at this website:
http://support.microsoft.com/kb/841650/

Use the updated DiskPart version for Windows Server 2003, which is available from the
Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all but one MSCS cluster node. Also, you must stop the applications in the resource that
access the volume to be expanded before the volume is expanded. Applications that are
running in other resources can continue to run. After the volume is expanded, start the
applications and the resource, and then restart the other nodes in the MSCS.

To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.



To start DiskPart, select Start → Run, and enter DiskPart.

DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts.
DiskPart is a command-line interface (CLI) that you can use to manage disks, partitions, and
volumes by using scripts or direct input on the command line. You can list disks and volumes,
select them, and after selecting them, get more detailed information, create partitions, extend
volumes, and so on. For more information about DiskPart, see this website:
http://www.microsoft.com
For more information about expanding the partitions of a cluster-shared disk, see this
website:
http://support.microsoft.com/kb/304736

Next, we show an example of how to expand a volume from the SVC on a Windows Server
2008 host.

To list a volume size, use the svcinfo lsvdisk <VDisk_name> command. This command
provides the volume size information for the Senegal_bas0001 volume before the volume is
expanded. Here, we can see that the capacity is 10 GB, and we can see the value of the
vdisk_UID. To see which disk this volume corresponds to on the Windows Server 2008 host,
we use the datapath query device SDD command on the Windows host. The serial number
6005076801A180E9080000000000000F of Disk1 on the Windows host matches the
vdisk_UID of Senegal_bas0001.

To see the size of the volume on the Windows host, we use Disk Management, as shown in
Figure 5-24.

Figure 5-24 Windows Server 2008: Disk Management

This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the volume. In this
example, we expand the volume by 1 GB, as shown in Example 5-18 on page 205.

Example 5-18 svctask expandvdisksize command
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001

IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk Senegal_bas0001


id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the volume was expanded, we use the svcinfo lsvdisk command. In
Example 5-18, we can see that the Senegal_bas0001 volume capacity was expanded to
11 GB.

After a disk rescan in Windows is performed, you can see the new unallocated space in
Windows Disk Management, as shown in Figure 5-25 on page 206.



Figure 5-25 Expanded volume in Disk Management

This window shows that Disk1 now has 1 GB of new unallocated capacity. To make this
capacity available to the file system, use the following commands, as shown in Example 5-19:
򐂰 diskpart: Starts DiskPart in a DOS prompt
򐂰 list volume: Shows all available volumes
򐂰 select volume: Selects the volume to expand
򐂰 detail volume: Displays details for the selected volume, including the free (unallocated)
capacity
򐂰 extend: Extends the volume into the available unallocated space

Example 5-19 Using the diskpart command


C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959


Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 75 GB Healthy System
Volume 1 S SVC_Senegal NTFS Partition 10 GB Healthy
Volume 2 D DVD-ROM 0 B Healthy

DISKPART> select volume 1


Volume 1 is the selected volume.

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---

* Disk 1 Online 11 GB 1020 MB

Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 0 B

Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No

After the volume is extended, the detail volume command shows no free capacity on the
volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-26.

Figure 5-26 Disk Management after extending the disk

This example uses a Windows basic disk. Dynamic disks can also be expanded by
expanding the underlying SVC volume. The new space appears as unallocated space at
the end of the disk.



In this case, you do not need to use the DiskPart tool. Instead, you can use Windows Disk
Management functions to allocate the new space. Expansion works irrespective of the volume
type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded
without stopping I/O, in most cases.

Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without
backing up your data. This operation is disruptive for the data because of a change in the
position of the logical block address (LBA) on the disks.

5.5.9 Removing a disk on Windows


To remove a disk from Windows when the disk is an SVC volume, follow the standard
Windows procedure to ensure that the disk contains no data that you want to preserve, that
no applications are using the disk, and that no I/O is going to the disk. After you complete this
procedure, remove the host mapping on the SVC. Ensure that you are removing the correct
volume. To confirm, use SDD to locate the serial number of the disk. On the SVC, run the
lshostvdiskmap command to find the volume’s name and number. Also, check that the SDD
serial number on the host matches the unique identifier (UID) on the SVC for the volume.

When the host mapping is removed, perform a rescan for the disk. Disk Management on the
server removes the disk, and the vpath goes into the CLOSE state on the server. Verify
these actions by running the SDD datapath query device command; note that the closed
vpath is removed only after the server is rebooted.

In the following examples, we show how to remove an SVC volume from a Windows server.
We show this example on a Windows Server 2008 operating system, but the steps also apply
to Windows Server 2008 R2 and Windows Server 2012.

Figure 5-24 on page 204 shows the Disk Management before removing the disk.

We now remove Disk 1. To find the correct volume information, we find the Serial/UID number
by using SDD, as shown in Example 5-20.

Example 5-20 Removing the SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0

3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0

Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host
mapping to remove by running the lshostvdiskmap command on the SVC. Then, we remove
the actual host mapping, as shown in Example 5-21.

Example 5-21 Finding and removing the host mapping


IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0
6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0
6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0
6005076801A180E90800000000000011

IBM_2145:ITSO_SVC1:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal


id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0
6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0
6005076801A180E90800000000000011



Here, we can see that the volume is removed from the server. On the server, we then perform
a disk rescan in Disk Management, and we now see that the correct disk (Disk1) was
removed, as shown in Figure 5-27.

Figure 5-27 Disk Management: Disk is removed

SDDDSM also shows us that the status for all paths to Disk1 changed to CLOSE because the
disk is not available, as shown in Example 5-22 on page 211.

Example 5-22 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0

The disk (Disk1) is now removed from the server. However, to remove the SDDDSM
information about the disk, you must reboot the server at a convenient time.

5.6 Using SAN Volume Controller CLI from a Windows host


To run CLI commands against the SVC, we must install and prepare an SSH client on the
Windows host system.

We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. You can download PuTTY from this website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

SSH client alternatives for Windows are available at this website:


http://www.openssh.com/windows.html

Cygwin software features an option to install an OpenSSH client. You can download Cygwin
from this website:
http://www.cygwin.com/
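After an SSH key pair is configured for the SVC cluster, a command-line SSH client can run SVC
CLI commands directly from the Windows host. The following sketch uses the plink utility that
is available with PuTTY; the key file path, user name, and cluster IP address are placeholders
for illustration only:

C:\>plink -ssh -i C:\putty\svc_private_key.ppk admin@9.43.86.120 svcinfo lshost

The command output is returned to the Windows command prompt, which makes this approach
convenient for simple scripting.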



5.7 Microsoft Volume Shadow Copy
The SVC provides support for Microsoft Volume Shadow Copy Service (VSS). Microsoft VSS
can provide a point-in-time (shadow) copy of a Windows host volume while the volume is
mounted and the files are in use.

In this section, we describe how to install VSS. The following operating system versions are
supported:
򐂰 Windows Server 2003 with Service Pack (SP) 2 (x86 and x86_64)
򐂰 Windows Server 2008 with SP2 (x86 and x86_64)
򐂰 Windows Server 2008 R2 with SP1
򐂰 Windows Server 2012

The following components are used to support the service:


򐂰 The SVC
򐂰 IBM System Storage hardware provider, which is known as the IBM System Storage
Support for Microsoft VSS
򐂰 Microsoft Volume Shadow Copy Service

IBM System Storage Support for Microsoft VSS (IBM VSS) is installed on the Windows host.

To provide the point-in-time shadow copy, complete the following process:


1. A backup application on the Windows host starts a snapshot backup.
2. VSS notifies IBM VSS that a copy is needed.
3. The SVC prepares the volume for a snapshot.
4. VSS quiesces the software applications that are writing data on the host and flushes file
system buffers to prepare for a copy.
5. The SVC creates the shadow copy by using the FlashCopy service.
6. VSS notifies the writing applications that I/O operations can resume and notifies the
backup application that the backup was successful.

VSS maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of
volumes. These pools are implemented as virtual host systems on the SVC.

5.7.1 Installation overview


The steps for implementing IBM VSS must be completed in the correct sequence. Before you
begin, you must have experience with, or knowledge of, administering a Windows operating
system. You also must have experience with, or knowledge of, administering an SVC.

You must complete the following tasks:


򐂰 Verify that the system requirements are met.
򐂰 Install IBM VSS.
򐂰 Verify the installation.
򐂰 Create a free pool of volumes and a reserved pool of volumes on the SVC.

5.7.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBM VSS and
Virtual Disk Service software on the Windows operating system:
򐂰 SVC with FlashCopy enabled
򐂰 IBM System Storage Support for Microsoft VSS and Virtual Disk Service (VDS) software

5.7.3 Installing the IBM System Storage hardware provider


This section describes the steps to install the IBM System Storage hardware provider on a
Windows server. You must satisfy all of the system requirements before you start the
installation.

During the installation, you are prompted to enter information about the SVC Master Console,
including the location of the truststore file. The truststore file is generated during the
installation of the Master Console. You must copy this file to a location that is accessible to the
IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation archive from the following IBM website and extract it to a
directory on the Windows server where you want to install IBM System Storage Support
for VSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log in to the Windows server as an administrator and browse to the directory where the
installation files were downloaded.
3. Run the installation program by double-clicking IBMVSSVDS.exe.
4. The Welcome window opens, as shown in Figure 5-28. Click Next to continue with the
installation.

Figure 5-28 IBM System Storage Support for VSS and VDS installation: Welcome



5. Accept the license agreement in the next window. The Choose Destination Location
window opens, as shown in Figure 5-29. Click Next to accept the default directory where
the setup program installs the files, or click Change to select another directory.

Figure 5-29 Choose Destination Location

6. Click Install to begin the installation, as shown in Figure 5-30.

Figure 5-30 IBM System Storage Support for VSS and VDS installation

7. The next window prompts you to select a Common Information Model (CIM) server,
which is the SVC. In contrast with older SVC versions, the configuration node now
provides the CIM service on the cluster IP address. Select the correct, automatically
discovered CIM server, or select Enter CIM Server address manually, and then click
Next, as shown in Figure 5-31 on page 215.

Figure 5-31 Select CIM Server

8. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-32):
a. The CIM Server Address field is populated with the URL of the CIM server
address that was chosen in the previous step.
b. In the CIM User field, enter the user name that the IBM VSS software uses to access
the SVC.
c. In the CIM Password field, enter the password for the SVC user name that was
provided in the previous step. Click Next.

Figure 5-32 Enter CIM Server Details

9. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system, as shown in Figure 5-33 on page 216.



Figure 5-33 Installation complete

Additional information: If these settings change after installation, you can use the
ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services
software with the new settings.

If you do not have the CIM Agent server, port, or user information, contact your CIM Agent
administrator.

5.7.4 Verifying the installation


Complete the following steps to verify the installation:
1. From the Windows server start menu, select Start → All Programs → Administrative
Tools → Services.
2. Ensure that the service that is named “IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service” appears, that its Status is set to Started,
and that its Startup Type is set to Automatic.
3. Open a command prompt window and run the following command:
vssadmin list providers
4. This command ensures that the service that is named IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a
provider, as shown in Example 5-23.

Example 5-23 Microsoft Software Shadow copy provider


C:\Users\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'


Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7

Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware
Provider'

Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 4.2.1.0816

If you can successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.

5.7.5 Creating free and reserved pools of volumes


The IBM System Storage hardware provider maintains a free pool of volumes and a reserved
pool of volumes. Because these objects do not exist on the SVC, the free pool of volumes and
the reserved pool of volumes are implemented as virtual host systems. You must define these
two virtual host systems on the SVC.

When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.

To successfully perform a Volume Shadow Copy Service operation, enough volumes must be
available that are mapped to the free pool. The volumes must be the same size as the source
volumes.

Use the SVC GUI or SVC CLI to complete the following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeros), as shown in Example 5-24.

Example 5-24 Creating a mkhost for the free pool


IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeros), as shown in Example 5-25.

Example 5-25 Creating a mkhost for the reserved pool


IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_RESERVED -hbawwpn
5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be
mapped to any other hosts. If you have volumes that are created for the free pool of
volumes, you must assign the volumes to the free pool.



4. Create host mappings between the volumes that were selected in step 3 and the
VSS_FREE host to add the volumes to the free pool. Alternatively, you can run the
ibmvcfg add command to add volumes to the free pool, as shown in Example 5-26.

Example 5-26 Host mappings


IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes were mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs, as shown in Example 5-27.

Example 5-27 Verify hosts


IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap VSS_FREE
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
2 VSS_FREE 0 10 msvc0001 5000000000000000 6005076801A180E90800000000000012
2 VSS_FREE 1 11 msvc0002 5000000000000000 6005076801A180E90800000000000013

5.7.6 Changing the configuration parameters


You can change the parameters that you defined when you installed the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do
so, use the ibmvcfg.exe utility. This command-line utility is in the C:\Program
Files\IBM\Hardware Provider for VSS-VDS directory, as shown in Example 5-28.

Example 5-28 Using ibmvcfg.exe utility help


C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)

Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>

Table 5-4 lists the available commands.

Table 5-4 Available ibmvcfg.exe utility commands


򐂰 ibmvcfg showcfg: Lists the current settings.
  Example: ibmvcfg showcfg
򐂰 ibmvcfg set username <username>: Sets the user name to access the SVC Console.
  Example: ibmvcfg set username Dan
򐂰 ibmvcfg set password <password>: Sets the password of the user name that accesses the SVC Console.
  Example: ibmvcfg set password mypassword
򐂰 ibmvcfg set targetSVC <ipaddress>: Specifies the IP address of the SVC on which the volumes are located when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
  Example: set targetSVC 9.43.86.120
򐂰 set backgroundCopy: Sets the background copy rate for FlashCopy.
  Example: set backgroundCopy 80
򐂰 ibmvcfg set usingSSL: Specifies whether to use the Secure Sockets Layer (SSL) protocol to connect to the SVC Console.
  Example: ibmvcfg set usingSSL yes
򐂰 ibmvcfg set cimomPort <portnum>: Specifies the SVC Console port number. The default value is 5999.
  Example: ibmvcfg set cimomPort 5999
򐂰 ibmvcfg set cimomHost <server name>: Sets the name of the server where the SVC Console is installed.
  Example: ibmvcfg set cimomHost cimomserver
򐂰 ibmvcfg set namespace <namespace>: Specifies the namespace value that the Master Console uses. The default value is \root\ibm.
  Example: ibmvcfg set namespace \root\ibm
򐂰 ibmvcfg set vssFreeInitiator <WWPN>: Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if a host exists in your environment with a WWPN of 5000000000000000.
  Example: ibmvcfg set vssFreeInitiator 5000000000000000
򐂰 ibmvcfg set vssReservedInitiator <WWPN>: Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if a host is already in your environment with a WWPN of 5000000000000001.
  Example: ibmvcfg set vssReservedInitiator 5000000000000001
򐂰 ibmvcfg listvols: Lists all of the volumes, including information about the size, location, and host mappings.
  Example: ibmvcfg listvols
򐂰 ibmvcfg listvols all: Lists all of the volumes, including information about the size, location, and host mappings.
  Example: ibmvcfg listvols all
򐂰 ibmvcfg listvols free: Lists the volumes that are in the free pool.
  Example: ibmvcfg listvols free
򐂰 ibmvcfg listvols unassigned: Lists the volumes that are currently not mapped to any hosts.
  Example: ibmvcfg listvols unassigned
򐂰 ibmvcfg add -s ipaddress: Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SVC where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg add vdisk12 and ibmvcfg add 600507 68018700035000000 0000000BA -s 66.150.210.141
򐂰 ibmvcfg rem -s ipaddress: Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SVC where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg rem vdisk12 and ibmvcfg rem 600507 68018700035000000 0000000BA -s 66.150.210.141
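As an illustration, the following sequence shows how the provider might typically be pointed
at an SVC cluster and then verified. The IP address, user name, and password are placeholders
only, and the exact parameter names should be checked against the help output of your
installed provider version:

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set user admin
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set password mypassword
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg set targetSVC 9.43.86.120
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg showcfg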

5.8 Specific Linux (on x86/x86_64) information


The following sections describe specific information that relates to the connection of Linux on
Intel hosts to the SVC environment.

5.8.1 Configuring the Linux host
Complete the following steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.5.4, “Installing and
configuring the host adapter” on page 193.
3. Install the supported HBA driver or firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Install SDD for Linux, as described in 5.8.5, “Multipathing in Linux” on page 222.
7. Configure the host, volumes, and host mapping in the SVC.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the
SVC (see the example after this list).
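For step 8, a rescan can usually be triggered through sysfs without a reboot. This is a minimal
sketch; the host0 and host1 entries are assumptions, and the numbering of your FC HBA ports
under /sys/class/scsi_host differs per system:

[root@palau ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@palau ~]# echo "- - -" > /sys/class/scsi_host/host1/scan

Repeat the command for every FC host adapter port, and then verify the newly discovered devices
with a command such as fdisk -l.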

5.8.2 Configuration information


The SVC supports hosts that run the following Linux distributions:
򐂰 Red Hat Enterprise Linux
򐂰 SUSE Linux Enterprise Server
򐂰 Debian x86_32 Server

For the latest information, see this website:


http://www.ibm.com/storage/support/2145

This website provides the hardware list for supported HBAs and device driver levels for Linux.
Check the supported firmware and driver level for your HBA, and follow the manufacturer’s
instructions to upgrade the firmware and driver levels for each type of HBA.

5.8.3 Disabling automatic Linux system updates


Many Linux distributions give you the ability to configure your systems for automatic system
updates. Red Hat provides this ability in the form of a program that is called up2date. Novell
SUSE provides the YaST Online Update utility. These features periodically query for updates
that are available for each host. You can configure them to automatically install any new
updates that they find.

Often, the automatic update process also upgrades the system to the latest kernel level. Old
hosts that are still running SDD must turn off the automatic update of kernel levels because
certain drivers that are supplied by IBM, such as SDD, depend on a specific kernel and cease
to function on a new kernel. Similarly, HBA drivers must be compiled against specific kernels
to function optimally. By allowing automatic updates of the kernel, you risk affecting your host
systems unexpectedly.

5.8.4 Setting queue depth with QLogic HBAs


The queue depth is the number of I/O operations that can be run in parallel on a device.



Complete the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file for the 2.6 kernel (SUSE Linux
Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
2. Rebuild the RAM disk, which is associated with the kernel that is being used, by using one
of the following commands:
– If you are running on a SUSE Linux Enterprise Server operating system, run the
mk_initrd command.
– If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd
command, and then restart.
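As an illustration for step 1, the following line sets the maximum queue depth to 32, which is
an arbitrary example value; choose a value that matches your workload and environment:

options qla2xxx ql2xfailover=0 ql2xmaxqdepth=32

After you edit the file, rebuild the RAM disk as described in step 2 and restart the host so
that the new setting takes effect.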

5.8.5 Multipathing in Linux


Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide
native multipath support in the operating system. On older systems, it is necessary to install
the IBM SDD multipath driver. Installation and configuration instructions for SDD are not
provided here because SDD should not be deployed on newly installed Linux hosts.

Device Mapper Multipath (DM-MPIO)


Red Hat Enterprise Linux 5 (RHEL5), and later, and SUSE Linux Enterprise Server 10
(SLES10), and later, provide their own multipath support for the operating system. Therefore,
you do not have to install another device driver. Always check whether your operating system
includes one of the supported multipath drivers. This information is available by using the
links that are provided in 5.8.2, “Configuration information” on page 221.

In SLES10, the multipath drivers and tools are installed, by default. However, for RHEL5, the
user must explicitly choose the multipath components during the operating system installation
to install them. Each of the attached SVC LUNs has a special device file in the Linux /dev
directory.

Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following website provides the current information about the maximum
configuration for the SVC:
http://www.ibm.com/storage/support/2145

Creating and preparing DM-MPIO volumes for use


First, you must start the MPIO daemon on your system. Run the following SLES command or
the following RHEL command on your host system:
򐂰 Enable MPIO for SLES by running the following commands:
/etc/init.d/boot.multipath {start|stop}
/etc/init.d/multipathd
{start|stop|status|try-restart|restart|force-reload|reload|probe}

Tip: Run insserv boot.multipath multipathd to automatically load the multipath


driver and multipathd daemon during the start.

򐂰 Enable MPIO for RHEL by running the following commands:
modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-29 shows the commands that are run on a Red Hat Enterprise Linux 6.3
operating system.

Example 5-29 Starting MPIO daemon on Red Hat Enterprise Linux


[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

Complete the following steps to enable multipathing for IBM devices:


1. Open the multipath.conf file and follow the instructions. The file is in the /etc directory.
Example 5-30 shows editing by using vi.

Example 5-30 Editing the multipath.conf file


[root@palau etc]# vi multipath.conf

2. Add the following entry to the multipath.conf file:


device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}

Note: You can download example multipath.conf files from the following IBM
Subsystem Device Driver for Linux website:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM

3. Restart the multipath daemon, as shown in Example 5-31.

Example 5-31 Stopping and starting the multipath daemon


[root@palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]

4. Run the multipath -dl command to see the MPIO configuration. You see two groups with
two paths each. All paths must have the state [active][ready], and one group shows
[enabled].



5. Run the fdisk command to create a partition on the SVC volume, as shown in Example 5-32.

Example 5-32 fdisk command


[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes


255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/hda1 * 1 13 104391 83 Linux
/dev/hda2 14 9730 78051802+ 8e Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes


131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes


255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes


255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table


[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n


Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now

6. Create a file system by running the mkfs command, as shown in Example 5-33.

Example 5-33 mkfs command


[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)



Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done


Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or


180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

7. Create a mount point and mount the drive, as shown in Example 5-34.

Example 5-34 Mount point


[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
73608360 1970000 67838912 3% /
/dev/hda1 101086 15082 80785 16% /boot
tmpfs 967984 0 967984 0% /dev/shm
/dev/dm-2 4080064 73696 3799112 2% /svcdisk_0

5.9 VMware configuration information


This section describes the requirements and other information for attaching the SVC to
various guest host operating systems that are running on the VMware operating system.

5.9.1 Configuring VMware hosts


To configure the VMware hosts, complete the following steps:
1. Install the HBAs in your host system, as described in 5.9.3, “HBAs for hosts that are
running VMware” on page 227.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.9.4, “VMware storage and zoning
guidance” on page 227.
4. Install the VMware operating system (if not already installed) and check the HBA timeouts,
as described in 5.9.5, “Setting the HBA timeout for failover in VMware” on page 228.

5. Configure the host, volumes, and host mapping in the SVC, as described in 5.9.7,
“Attaching VMware to volumes” on page 229.

5.9.2 Operating system versions and maintenance levels


For more information about VMware support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

At the time of this writing, the following versions are supported:


򐂰 ESX V5.x
򐂰 ESX V4.x

5.9.3 HBAs for hosts that are running VMware


Ensure that your hosts that are running on VMware operating systems use the correct HBAs
and firmware levels. Install the host adapters in your system. See the manufacturer’s
instructions for the installation and configuration of the HBAs.

For more information about supported HBAs for older ESX versions, see this website:
http://ibm.com/storage/support/2145

Mostly, the supported HBA device drivers are included in the ESX server build. However, for
various newer storage adapters, you might be required to load more ESX drivers. Check the
following VMware hardware compatibility list (HCL) if you must load a custom driver for your
adapter:
http://www.vmware.com/resources/compatibility/search.php

After the HBAs are installed, load the default configuration of your FC HBAs. You must use
the same model of HBA with the same firmware in one server. Configuring Emulex and
QLogic HBAs to access the same target in one server is not supported.

SAN boot support


The SAN boot of any guest operating system is supported under VMware. The nature of
VMware means that SAN boot is a requirement on any guest operating system. The guest
operating system must be on a SAN disk and SAN-attached to the SVC. iSCSI SAN boot is
not supported by VMware at the time of writing.

If you are unfamiliar with the VMware environment and the advantages of storing virtual
machines and application data on a SAN, it is useful to get an overview about VMware
products before you continue.

VMware documentation is available at this website:


http://www.vmware.com/support/pubs/

5.9.4 VMware storage and zoning guidance


The VMware ESX server can use a Virtual Machine File System (VMFS). VMFS is a file
system that is optimized to run multiple virtual machines as one workload to minimize disk
I/O. It also can handle concurrent access from multiple physical machines because it enforces
the appropriate access controls. Therefore, multiple ESX hosts can share the set of LUNs.



Theoretically, you can run all of your virtual machines on one LUN. However, for performance
reasons in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storage systems, or arrays.

For example, if an ESX host runs several virtual machines, it can make sense to use one
slower array for guest operating systems without high I/O demands, such as print servers and
Active Directory Services, and a faster array for database guest operating systems.

The use of fewer volumes has the following advantages:


򐂰 More flexibility to create virtual machines without creating space on the SVC
򐂰 More possibilities for taking VMware snapshots
򐂰 Fewer volumes to manage

The use of more and smaller volumes has the following advantages:
򐂰 Separate I/O characteristics of the guest operating systems
򐂰 More flexibility (the multipathing policy and disk shares are set per volume)
򐂰 Microsoft Cluster Service requires its own volume for each cluster disk resource

For more information about designing your VMware infrastructure, see the following websites:
򐂰 http://www.vmware.com/vmtn/resources/
򐂰 http://www.vmware.com/resources/techresources/1059

Guidelines: ESX server hosts that use shared storage for virtual machine failover or load
balancing must be in the same zone. You can have only one VMFS datastore per SVC volume.

5.9.5 Setting the HBA timeout for failover in VMware


The timeout for failover for ESX hosts must be set to 30 seconds. Consider the following
points:
򐂰 For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The
timeout value is shown in the following example:
2 x PortDownRetryCount + 5 seconds
Set the qlport_down_retry parameter to 14.
򐂰 For Emulex HBAs, the lpfc_linkdown_tmo and the lpcf_nodev_tmo parameters must be
set to 30 seconds.

To make these changes on your system (Example 5-35), complete the following steps:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters and edit the previously described parameters.
4. Repeat this process for every installed HBA.

Example 5-35 Setting the HBA timeout


[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf

5.9.6 Multipathing in ESX
The VMware ESX server performs native multipathing. You do not need to install another
multipathing driver, such as SDD.

5.9.7 Attaching VMware to volumes


First, ensure that the VMware host is logged in to the SVC. In our examples, we use the
VMware ESX server V3.5 and the host name of Nile.

Enter the following command to check the status of the host:


svcinfo lshost <hostname>

Example 5-36 shows that the host Nile is logged in to the SVC with two HBAs.

Example 5-36 The lshost Nile


IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, the SCSI Controller Type must be set in VMware. By default, the ESX server disables
SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time. See Figure 5-34 on page 230.

However, in many configurations, such as configurations for high availability, the virtual
machines must share the VMFS file to share a disk.

Complete the following steps to set the SCSI Controller Type in VMware:
1. Log in to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the following available settings, depending
on your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.



Figure 5-34 Changing SCSI bus settings

3. Create your volumes on the SVC. Then, map them to the ESX hosts.

Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file
must be visible to every ESX host that can host the virtual machine.

In the SVC, select Allow the virtual disks to be mapped even if they are already
mapped to a host.

The volume must have the same SCSI ID on each ESX host.

For this configuration, we created one volume and mapped it to our ESX host, as shown in
Example 5-37.

Example 5-37 Mapped volume to ESX host Nile


IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 12 VMW_pool 210000E08B892BCD
60050768018301BF2800000000000010
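For reference, the volume and mapping that are shown in Example 5-37 can be created with CLI
commands similar to the following sketch. The storage pool name, size, and I/O Group come from
our lab setup and serve only as placeholders; the -force flag is needed only when the volume is
already mapped to another ESX host, which corresponds to the GUI option that is mentioned in
the previous tip:

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp0 -size 60 -unit gb -name VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -force -host Nile VMW_pool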

ESX does not automatically scan for SAN changes (except when rebooting the entire ESX
server). If you made any changes to your SVC or SAN configuration, complete the following
steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.

To configure a storage device to use it in VMware, complete the following steps:


1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes and click the
Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a storage pool, select click here to create a datastore or click Add storage if
the field does not appear (Figure 5-35 on page 231).

Figure 5-35 VMware add datastore

5. The Add storage wizard opens.


6. Select Create Disk/Lun, and then click Next.
7. Select the SVC volume that you want to use for the datastore, and then click Next.
8. Review the disk layout. Click Next.
9. Enter a datastore name. Click Next.
10.Select a block size and enter the size of the new partition. Click Next.
11.Review your selections. Click Finish.

Now, the created VMFS datastore appears in the Storage window, as shown in Figure 5-36.
You see the details for the highlighted datastore. Check whether all of the paths are available
and that the Path Selection is set to Round Robin.

Figure 5-36 VMware storage configuration

If not all of the paths are available, check your SAN and storage configuration. After the
problem is fixed, select Refresh to perform a path rescan. The view is updated to the new
configuration.

The preferred practice is to use the Round Robin Multipath Policy for the SVC. If you need to
edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.



5. Select Round Robin.
6. Click OK.
7. Click Close.

Now, your VMFS datastore is created and you can start using it for your guest operating
systems. Round Robin distributes the I/O load across all available paths. If you want to use a
fixed path, the policy setting Fixed also is supported.

5.9.8 Volume naming in VMware


In the Virtual Infrastructure Client, a volume is displayed as a sequence of three or four
numbers, which are separated by colons (Figure 5-37) and are shown under the Device and
SAN Identifier columns, as shown in the following example:

<SCSI HBA>:<SCSI target>:<SCSI volume>:<disk partition>

The following definitions apply to the previous variables:


򐂰 SCSI HBA: The number of the SCSI HBA (can change).
򐂰 SCSI target: The number of the SCSI target (can change).
򐂰 SCSI volume: The number of the volume (never changes).
򐂰 disk partition: The number of the disk partition. (This value never changes.) If the last
number is not displayed, the name stands for the entire volume.
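As an illustrative example (the numbers are hypothetical), a device that is shown as
vmhba1:0:12:1 refers to disk partition 1 of SCSI volume (LUN) 12, which is reached through
SCSI target 0 on HBA vmhba1. The shorter form vmhba1:0:12 refers to the entire volume.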

Figure 5-37 Volume naming in VMware

5.9.9 Setting the Microsoft guest operating system timeout


For a Windows 2003 Server or Windows Server 2008 operating system that is installed as a
VMware guest operating system, the disk timeout value must be set to 60 seconds.

For more information about performing this task, see 5.5.5, “Changing the disk timeout on
Windows Server” on page 193.
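As a quick reference, the timeout is controlled by the TimeOutValue registry value. The
following sketch sets it to 60 seconds from an elevated command prompt inside the guest; back
up the registry before you change it, and see 5.5.5 for the full procedure:

C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

A restart of the guest operating system might be required for the new value to take effect.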

5.9.10 Extending a VMFS volume


VMFS volumes can be extended while virtual machines are running. First, you must extend
the volume on the SVC, and then you can extend the VMFS volume.

Note: Before you perform the steps that are described here, back up your data.

Complete the following steps to extend a volume:
1. Expand the volume by running the svctask expandvdisksize -size 5 -unit gb
<VDiskname> command, as shown in Example 5-38.

Example 5-38 Expanding a volume on the SVC


IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0



status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>

2. Open the Virtual Infrastructure Client.


3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Ensure that the Scan for new Storage Devices option is selected, and then click OK.
After the scan completes, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10.Click Add Extent.

11.Select the new free space, and then click Next.
12.Click Next.
13.Click Finish.

The VMFS volume is now extended and the new space is ready for use.

5.9.11 Removing a datastore from an ESX host


Before you remove a datastore from an ESX host, you must migrate or delete all of the virtual
machines that are on this datastore.

To remove the datastore, complete the following steps:


1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all
of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the volume, as shown in Example 5-39.

Example 5-39 Host mapping: Delete the volume


IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool

10.In the VI Client, select Storage Adapters.


11.Click Rescan.
12.Ensure that the Scan for new Storage Devices option is selected and click OK.
13.After the scan completes, the disk is removed from the view.

Your datastore is now removed successfully from the system.

For more information about supported software and driver levels, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

5.10 Sun Solaris hosts


At the time of writing, Sun Solaris (SunOS) host versions 8, 9, 10, and 11 are supported by
the SVC; however, SunOS 5.8 (Solaris 8) is discontinued.



5.10.1 SDD dynamic pathing
Solaris supports dynamic pathing when you add more paths to an existing volume or present
a new volume to a host. No user intervention is required. SDD is aware of the preferred paths
that the SVC sets per volume.

SDD uses a round-robin algorithm when failing over paths. That is, SDD tries the next known
preferred path. If this method fails and all preferred paths are tried, it uses a round-robin
algorithm on the non-preferred paths until it finds a path that is available. If all paths are
unavailable, the volume goes offline. Therefore, it can take time to perform path failover when
multiple paths go offline.

SDD under Solaris performs load balancing across the preferred paths, where appropriate.

Veritas Volume Manager with dynamic multipathing


Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the
next available I/O path for I/O requests without action from the administrator. VM with DMP is
also informed when you repair or restore a connection, and when you add or remove devices
after the system is fully booted (if the operating system recognizes the devices correctly). The
new Java Native Interface (JNI) drivers support the host mapping of new volumes without
rebooting the Solaris host.

Note the following support characteristics:


򐂰 Veritas VM with DMP supports load balancing across multiple paths with the SVC.
򐂰 Veritas VM with DMP does not support preferred pathing with the SVC.

Coexistence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP coexists in “pass-through” mode with SDD. DMP uses the
vpath devices that are provided by SDD.

OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA, and SFRAC V4.1/5.0, and Solaris with
Sun Cluster V3.1/3.2 are supported at the time of this writing.

SAN boot support


Note the following support characteristics:
򐂰 Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
򐂰 Boot from SAN is not supported when SDD is used as the multipathing software.

5.11 Hewlett-Packard UNIX configuration information


For more information about Hewlett-Packard UNIX (HP-UX) support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

5.11.1 Operating system versions and maintenance levels
At the time of this writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).

5.11.2 Supported multipath solutions


At the time of this writing, SDD V1.7.2.0 for HP-UX is supported. Multipathing Software PV
Link and Cluster Software Service Guard V11.14/11.16/11.17/11.18 are also supported.
However, in a cluster environment, we suggest that you use SDD.

SDD dynamic pathing


HP-UX supports dynamic pathing when you add more paths to an existing volume or present
a new volume to a host.

SDD is aware of the preferred paths that the SVC sets per volume. SDD uses a round-robin
algorithm when it fails over paths. That is, it tries the next known preferred path. If this method
fails and all preferred paths were tried, it uses a round-robin algorithm on the non-preferred
paths until it finds a path that is available. If all paths are unavailable, the volume goes offline.
Therefore, it can take time to perform path failover when multiple paths go offline.

SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical volume links dynamic pathing


Unlike SDD, physical volume links (PVLinks) do not load balance, and PVLinks is unaware of
the preferred paths that the SVC sets per volume. Therefore, we strongly suggest that you
use SDD, except when in a clustering environment or when an SVC volume is used as your
boot disk.

When you are creating a VG, specify the primary path that you want HP-UX to use when it is
accessing the Physical Volume (PV) that is presented by the SVC. Only this path is used to
access the PV if it is available, no matter what the SVC’s preferred path is to that volume.
Therefore, be careful when you are creating VGs so that the primary links to the PVs (and
load) are balanced over both HBAs, FC switches, SVC nodes, and so on.

When you are extending a VG to add alternative paths to the PVs, the order in which you add
these paths is HP-UX’s order of preference if the primary path becomes unavailable.
Therefore, when you are extending a VG, the first alternative path that you add must be from
the same SVC node as the primary path to avoid unnecessary node failover because of an
HBA, FC link, or FC switch failure.
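The following sketch illustrates this ordering for a hypothetical volume that is visible
through four paths; all of the device file names are assumptions and differ on your system.
The primary path and the first alternative link point to the same SVC node, and the remaining
links are added afterward:

pvcreate /dev/rdsk/c10t0d1
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate /dev/vg01 /dev/dsk/c10t0d1                   # primary path (SVC node 1)
vgextend /dev/vg01 /dev/dsk/c12t0d1                   # first alternative link (same SVC node 1)
vgextend /dev/vg01 /dev/dsk/c20t0d1 /dev/dsk/c22t0d1  # remaining links (SVC node 2)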

5.11.3 Coexistence of SDD and PVLinks


If you want to multipath a volume with PVLinks while SDD is installed, you must ensure that
SDD does not configure a vpath for that volume. To do so, put the serial number of any
volume that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In a SAN boot
configuration, if you are booting from an SVC volume, SDD automatically ignores the boot
volume when you install SDD (from version 1.6 onward).

SAN boot support


SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot
device. You can use PVLinks or SDD to provide the multipathing support for the other devices
that are attached to the system.



5.11.4 Using an IBM SAN Volume Controller volume as a cluster lock disk
ServiceGuard does not provide a way to specify alternative links to a cluster lock disk. When
you are using an SVC volume as your lock disk, the HP node cannot access the lock disk if a
50-50 split in quorum occurs if the path to FIRST_CLUSTER_LOCK_PV becomes
unavailable.

When you are editing your Cluster Configuration ASCII file, ensure that the variable
FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your
cluster to ensure redundancy. For example, when you are configuring a two-node HP cluster,
ensure that FIRST_CLUSTER_LOCK_PV on HP server A is on a separate SVC node and
through a separate FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.

5.11.5 Support for HP-UX with more than eight LUNs


HP-UX does not recognize more than eight LUNs per port when the generic SCSI behavior is
used.

To accommodate this behavior, the SVC supports a “type” attribute that is associated with a
host. This type can be set by using the svctask mkhost command and modified by using the
svctask chhost command. The default type is generic; set the type to hpux for HP-UX hosts
that need more than eight LUNs per port.
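A minimal sketch of setting the host type follows; the host name and WWPN are placeholders
only:

IBM_2145:ITSO_SVC1:admin>svctask mkhost -name HP_Server_A -hbawwpn 10000000C9123456 -type hpux
IBM_2145:ITSO_SVC1:admin>svctask chhost -type hpux HP_Server_A

The first command creates a host object with the hpux type; the second command changes the
type of an existing host object.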

When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the
SVC behaves in the following way:
򐂰 Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
򐂰 When an inquiry command for any page is sent to LUN 0 by using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
򐂰 When any command other than an inquiry is sent to LUN 0 by using Peripheral Device
Addressing, the SVC responds as an unmapped LUN 0 normally responds.
򐂰 When an inquiry is sent to LUN 0 by using Flat Space Addressing, it is reported as
Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as
Peripheral Device Type 1Fh (unknown device type) otherwise.
򐂰 When an inquiry is sent to an unmapped LUN that is not LUN 0 by using Peripheral Device
Addressing, the Peripheral qualifier that is returned is 001b and the Peripheral Device type
is 1Fh (unknown or no device type). This response is in contrast to the behavior for
generic hosts, where peripheral Device Type 00h is returned.

5.12 Using the SDDDSM, SDDPCM, and SDD web interface


After the SDDDSM or SDD driver is installed, specific commands are available. To open a
command window for SDDDSM or SDD, select Start → Programs → Subsystem Device
Driver → Subsystem Device Driver Management from the desktop.

For more information about the command documentation for the various operating systems,
see the Multipath Subsystem Device Driver User’s Guide, S7000303:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

You can also configure SDDDSM to offer a web interface that provides basic information.
Before this configuration can work, you must configure the web interface. SDDSRV does not
bind to any TCP/IP port, by default, but it allows port binding to be enabled or disabled
dynamically.

For all platforms except Linux, the multipath driver package includes an sddsrv.conf
template file that is named the sample_sddsrv.conf file. On all UNIX platforms except Linux,
the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, the
sample_sddsrv.conf file is in the directory in which SDDDSM was installed.

Copy the sample_sddsrv.conf file to a file that is named sddsrv.conf in the same directory.
You can then dynamically change the port binding by editing the sddsrv.conf file and setting
the Enableport and Loopbackbind parameters to True.
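For example, on a Windows host where SDDDSM is installed in C:\Program Files\IBM\SDDDSM,
you might copy the template and then edit the two parameters, as shown in the following
sketch (the installation path is an example, and the parameter names are taken from the
template file):

   C:\Program Files\IBM\SDDDSM>copy sample_sddsrv.conf sddsrv.conf

   In sddsrv.conf, set:
   Enableport = True
   Loopbackbind = True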

Figure 5-38 shows the start window of the multipath driver web interface.

Figure 5-38 SDD web interface

5.13 More information


For more information about host attachment and configuration to the SVC, see the IBM SAN
Volume Controller: Host Attachment User’s Guide, SC26-7905.

For more information about SDDDSM configuration, see the IBM System Storage Multipath
Subsystem Device Driver User’s Guide, S7000303, which is available from this website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

For more information about host attachment and storage subsystem attachment, and
troubleshooting, see the IBM SAN Volume Controller Knowledge Center at this website:
http://www-01.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp



5.13.1 SAN Volume Controller storage subsystem attachment guidelines
It is beyond the scope of this book to describe the attachment to each subsystem that the
SVC supports. The following material was useful in the writing of this book and when working
with clients:
򐂰 For more information about how you can optimize your back-end storage to maximize your
performance on the SVC, see SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
򐂰 For more information about the guidelines and procedures to optimize the performance
that is available from your DS8000 storage subsystem when it is attached to the SVC, see
DS8000 Performance Monitoring and Tuning, SG24-8013, which is available at this
website:
http://www.redbooks.ibm.com/abstracts/sg248013.html?Open
򐂰 For more information about how to connect and configure your storage for optimized
performance on the SVC, see the IBM Midrange System Storage Implementation and
Best Practices Guide, SG24-6363, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg246363.html?Open
򐂰 For more information about specific considerations for attaching the XIV Storage System
to an SVC, see IBM XIV Storage System: Architecture, Implementation and Usage,
SG24-7659, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open


Chapter 6. Data migration


In this chapter, we describe how to migrate from a conventional storage infrastructure to a
virtualized storage infrastructure by using the IBM SAN Volume Controller (SVC). We also
explain how the SVC can be phased out of a virtualized storage infrastructure, for example,
after a trial period or after using the SVC as a data migration tool.

We also introduce and demonstrate the SVC support of the nondisruptive movement of
volumes between SVC I/O Groups, which is referred to as nondisruptive volume move or
multinode volume access.

We describe how to migrate from a fully allocated volume to a thin-provisioned volume by
using the volume mirroring feature and the thin-provisioned volume together.

This chapter includes the following topics:


򐂰 Migration overview
򐂰 Migration operations
򐂰 Functional overview of migration
򐂰 Migrating data from an image mode volume
򐂰 Data migration for Windows by using the SVC GUI
򐂰 Migrating Linux SAN disks to SVC disks
򐂰 Migrating ESX SAN disks to SVC disks
򐂰 Migrating AIX SAN disks to SVC volumes
򐂰 Using IBM SAN Volume Controller for storage migration
򐂰 Using volume mirroring and thin-provisioned volumes together



6.1 Migration overview
By using the SVC, you can change the mapping of volume extents to managed disk (MDisk)
extents, without interrupting host access to the volume. This functionality is used when
volume migrations are performed. It also applies to any volume that is defined on the SVC.

This functionality can be used for the following tasks:


򐂰 Migrating data from older back-end storage to SVC managed storage.
򐂰 Migrating data from one back-end controller to another back-end controller by using the
SVC as a data block mover, and afterward removing the SVC from the storage area
network (SAN).
򐂰 Migrating data from managed mode back into image mode before the SVC is removed
from a SAN.
򐂰 Redistributing volumes to accomplish the following tasks:
– Moving workload onto newly installed storage
– Moving workload off old or failing storage before you decommission the storage
– Moving workload to rebalance a changed workload
򐂰 Migrating data from one SVC clustered system to another SVC system.
򐂰 Moving volumes’ I/O caching between SVC I/O Groups to redistribute workload across an
SVC clustered system.

6.2 Migration operations


You can perform migration at the volume level or the extent level, depending on the purpose
of the migration. The following migration tasks are supported:
򐂰 Migrating extents within a storage pool and redistributing the extents of a volume on the
MDisks within the same storage pool
򐂰 Migrating extents off an MDisk (which is removed from the storage pool) to other MDisks in
the same storage pool
򐂰 Migrating a volume from one storage pool to another storage pool
򐂰 Migrating a volume to change the virtualization type of the volume to image
򐂰 Moving a volume between I/O Groups non-disruptively

6.2.1 Migrating multiple extents within a storage pool


You can migrate many volume extents at one time by using the migrateexts command.

For more information about the migrateexts command parameters, see the following
resources:
򐂰 The SVC command-line interface help by entering the following command:
help migrateexts
򐂰 The IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide,
GC27-2287

When this command is run, a number of extents are migrated from the source MDisk where
the extents of the specified volume are located to a defined target MDisk that must be part of
the same storage pool.

You can specify a number of migration threads to be used in parallel (1 - 4).
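For example, the following sketch migrates 10 extents of a volume from one MDisk to
another MDisk in the same storage pool by using two parallel threads. The volume and
MDisk names are examples only:

   svctask migrateexts -vdisk volume_A -source mdisk10 -target mdisk12 -exts 10 -threads 2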

If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.

6.2.2 Migrating extents off an MDisk that is being deleted


When an MDisk is deleted from a storage pool by using the rmmdisk -force command, any
extents on the MDisk that are used by a volume are first migrated off the MDisk and onto
other MDisks in the storage pool before it is deleted.

In this case, the extents that must be migrated are moved onto the set of MDisks that are not
being deleted. This behavior also applies if multiple MDisks are being removed from the
storage pool at the same time.

If a volume uses one or more extents that must be moved as a result of running the rmmdisk
command, the virtualization type for that volume is set to striped (if it was previously
sequential or image).

If the MDisk is operating in image mode, the MDisk changes to managed mode while the
extents are being migrated. Upon deletion, it changes to unmanaged mode.

Using the -force flag: If the -force flag is not used and if volumes occupy extents on one
or more of the MDisks that are specified, the command fails.

When the -force flag is used and if volumes occupy extents on one or more of the MDisks
that are specified, all extents on the MDisks are migrated to the other MDisks in the
storage pool if enough free extents exist in the storage pool. The deletion of the MDisks is
postponed until all extents are migrated, which can take time. If insufficient free extents
exist in the storage pool, the command fails.
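For example, the following sketch removes an MDisk from a storage pool and forces any
used extents to be migrated to the remaining MDisks in that pool. The MDisk and storage
pool names are examples only:

   svctask rmmdisk -mdisk mdisk10 -force Pool1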

6.2.3 Migrating a volume between storage pools


An entire volume can be migrated from one storage pool to another storage pool by using the
migratevdisk command. A volume can be migrated between storage pools regardless of the
virtualization type (image, striped, or sequential), although it changes to the virtualization type
of striped. The command varies, depending on the type of migration, as shown in Table 6-1.

Table 6-1 Migration types and associated commands


Storage pool-to-storage pool type Command

Managed to managed migratevdisk

Image to managed migratevdisk

Managed to image migratetoimage

Image to image migratetoimage



Rule: For the migration to be acceptable, the source storage pool and the destination
storage pool must have the same extent size. Volume mirroring can also be used to
migrate a volume between storage pools. You can use this method if the extent sizes of the
two pools are not the same.

Figure 6-1 shows a managed volume migration to another storage pool.

Figure 6-1 Managed volume migration to another storage pool

In Figure 6-1, we show volume V3 migrating from Pool 2 to Pool 3.

Extents are allocated to the migrating volume from the set of MDisks in the target storage
pool by using the extent allocation algorithm.

The process can be prioritized by specifying the number of threads that are used in parallel
(1 - 4) while migrating; the use of only one thread puts the least background load on the
system.

The offline rules apply to both storage pools. Therefore, as shown in Figure 6-1, if any of the
M4, M5, M6, or M7 MDisks go offline, the V3 volume goes offline. If the M4 MDisk goes
offline, V3 and V5 go offline; however, V1, V2, V4, and V6 remain online.

If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.

During the move, the volume is listed as a member of the original storage pool. From a
configuration perspective, the volume moves to the new storage pool instantaneously at the
end of the migration.

6.2.4 Migrating the volume to image mode
The facility to migrate a volume to an image mode volume can be combined with the
capability to migrate between storage pools. The source for the migration can be a managed
mode or an image mode volume. This combination of functions leads to the following
possibilities:
򐂰 Migrate image mode to image mode within a storage pool.
򐂰 Migrate managed mode to image mode within a storage pool.
򐂰 Migrate image mode to image mode between storage pools.
򐂰 Migrate managed mode to image mode between storage pools.

The following conditions must be met so that migration can occur:


򐂰 The destination MDisk must be greater than or equal to the size of the volume.
򐂰 The MDisk that is specified as the target must be in an unmanaged state at the time that
the command is run.
򐂰 If the migration is interrupted by a system recovery, the migration resumes after the
recovery completes.
򐂰 If the migration involves moving between storage pools, the volume behaves as described
in 6.2.3, “Migrating a volume between storage pools” on page 243.

Regardless of the mode in which the volume starts, the volume is reported as being in
managed mode during the migration. Also, both of the MDisks that are involved are reported
as being in image mode during the migration. Upon completion of the command, the volume
is classified as an image mode volume.
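For example, the following sketch migrates a volume to image mode onto an unmanaged
MDisk in a target storage pool. The volume, MDisk, and pool names are examples only:

   svctask migratetoimage -vdisk volume_A -mdisk mdisk20 -mdiskgrp ImagePool -threads 2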

6.2.5 Non-disruptive Volume Move


One of the major enhancements that was introduced in SVC code version 6.4 is the feature
that is called Non-disruptive Volume Move (NDVM). In previous versions of the SVC code,
volumes could be migrated between I/O Groups, but the operation required I/O to be
quiesced on all volumes that were being migrated. After the migration completed, the new
I/O Group owned exclusive access to the volume I/O operations.

NDVM supports access to a single volume by all nodes in the clustered system. This feature
adds the concept of access I/O Groups versus caching I/O Groups. Although any node in the
system can access the volume, a single I/O Group still controls the I/O caching. This
dynamic balancing of the SVC workload is helpful in situations where the natural growth of
the environment’s I/O demands forces the client and storage administrators to expand
hardware resources. With NDVM, you can instantly rebalance the volume workload onto a
new set of SVC nodes (an I/O Group) without quiescing or interrupting application
operations, which easily lowers the high utilization of the original I/O Group.

Before you move the volumes to a new I/O Group on the SVC system, ensure that the
following prerequisites are met:
򐂰 The host has access to the new I/O Group node ports through SAN zoning.
򐂰 The host is assigned to the new I/O Group on the SVC system level.
򐂰 The host operating system and multipathing software support the NDVM feature.



For more information about supported systems, see the Supported Hardware List, Device
Driver, Firmware, and Recommended Software Levels for the SVC, which is available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946

In this example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-2) and confirm the # of I/O Groups column.

Figure 6-2 SVC Hosts I/O Groups assignment

2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-3.

Figure 6-3 Caching I/O Group ID

3. Now, we move lpar01_vol3 from the existing SVC I/O Group 0 to the new I/O Group 1.
From the left menu pane, select Volumes to see all of the volumes and optionally, filter the
output for the results that you want, as shown in Figure 6-4.

Figure 6-4 Select and filter volumes

4. Right-click volume lpar01_vol3, and in the menu, select Move Volume to a New I/O
Group.
5. The Move Volume to a New I/O Group wizard window starts (Figure 6-5). Click Next.

Figure 6-5 Move Volume to a New I/O Group wizard: Welcome window

6. Select I/O Group and Node → New Group (and optionally the preferred SVC node) or
leave Automatic for the default node assignment. Click Apply and Next, as shown in
Figure 6-6 on page 248.
You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the svctask movevdisk and svctask addvdiskaccess
commands.



Figure 6-6 Move Volume to a New I/O Group wizard: Select New I/O Group and Preferred Node

7. The task completion window opens. Next, the selected host must detect the new paths so
that I/O processing can switch over to the new I/O Group. Perform the path detection by
following the operating system-specific procedures, as shown in Figure 6-7. Click Apply
and Next.

Figure 6-7 Move Volume to a New I/O Group wizard: Detect New Paths window

8. The SVC removes the old I/O Group access to a volume by calling the svctask
rmvdiskaccess CLI command. After the task completes, close the task window.
9. The confirmation with information about the I/O Group move is displayed on the Move
Volume to a New I/O Group wizard window. Proceed to the Summary by clicking Next.

10.Review the summary information and click Finish. The volume is successfully moved to a
new I/O Group without I/O disruption on the host side. To confirm that the volume is now
cached by the new I/O Group, check the Caching I/O Group column on the Volumes
submenu, as shown in Figure 6-8.

Figure 6-8 New caching I/O Group

Note: For SVC code version 6.4 and higher, the CLI command svctask chvdisk is not
supported for migrating a volume between I/O Groups. Although svctask chvdisk still
modifies multiple properties of a volume, the new SVC CLI command movevdisk is used for
moving a volume between I/O Groups.

In certain situations, you might want to keep the volume accessible through multiple I/O
Groups. This configuration is possible, but only a single I/O Group can provide the caching of
the I/O to the volume. To modify which I/O Groups can access a volume, use the SVC CLI
commands addvdiskaccess and rmvdiskaccess.
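As a sketch of the CLI sequence that corresponds to the GUI example in this section (the I/O
Group names are examples, and rmvdiskaccess is run only after the host detects the new
paths), you might enter the following commands:

   svctask movevdisk -iogrp io_grp1 lpar01_vol3
   svctask addvdiskaccess -iogrp io_grp1 lpar01_vol3
   svctask rmvdiskaccess -iogrp io_grp0 lpar01_vol3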

You can also modify the I/O Group access in the SVC GUI by selecting the volume,
right-clicking and selecting Properties → Edit, and then choosing the I/O Groups that you
want in the Accessible I/O Groups property, as shown in Figure 6-9 on page 250.

Important: To change the caching I/O Group for a volume, use the movevdisk command.



Figure 6-9 Modifying accessible I/O Groups for a volume

6.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use the following CLI command:
svcinfo lsmigrate

To determine the extent allocation of MDisks and volumes, use the following commands:
򐂰 To list the volume IDs and the corresponding number of extents that the volumes occupy
on the queried MDisk, use the following CLI command:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
򐂰 To list the MDisk IDs and the corresponding number of extents that the queried volumes
occupy on the listed MDisks, use the following CLI command:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
򐂰 To list the number of available free extents on an MDisk, use the following CLI command:
svcinfo lsfreeextents <mdiskname | mdisk_id>
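For example, before and during an extent migration, you might run the following commands.
The MDisk names are examples only; the first command checks the free extents on the
target MDisk, the second lists the volumes that occupy extents on the source MDisk, and the
third shows the progress of the running migrations:

   svcinfo lsfreeextents mdisk12
   svcinfo lsmdiskextent mdisk10
   svcinfo lsmigrate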

Important: After a migration is started, the migration cannot be stopped. The migration
runs to completion unless it is stopped or suspended by an error condition, or if the volume
that is being migrated is deleted.

If you want the ability to start, suspend, or cancel a migration or control the rate of
migration, consider the use of the volume mirroring function or migrating volumes between
storage pools.

6.3 Functional overview of migration
This section describes a functional view of data migration.

6.3.1 Parallelism
You can perform several of the following activities in parallel.

Each system
An SVC system supports up to 32 concurrently active migration tasks of the following types:
򐂰 Migrate multiple extents
򐂰 Migrate between storage pools
򐂰 Migrate off a deleted MDisk
򐂰 Migrate to image mode

The following high-level migration tasks operate by scheduling single extent migrations:
򐂰 Up to 256 single extent migrations can run concurrently. This number is made up of single
extent migrations, which result from the operations previously listed.
򐂰 The Migrate Multiple Extents and Migrate Between storage pools commands support a
flag with which you can specify the number of parallel “threads” to use (1 - 4). This
parameter affects the number of extents that are concurrently migrated for that migration
operation. Therefore, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation (subject to other resource constraints).

Each MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does
not consider whether the MDisk is the source or the destination. If more than four single
extent migrations are scheduled for a particular MDisk, further migrations are queued,
pending the completion of one of the currently running migrations.

6.3.2 Error handling


The migration is suspended or stopped if one of the following conditions exists:
򐂰 A medium error occurs on a read from the source.
򐂰 The destination’s medium error table is full.
򐂰 An I/O error occurs on a read from the source repeatedly.
򐂰 The MDisks go offline repeatedly.

The migration is only suspended if any of the following conditions exist. Otherwise, the
migration is stopped:
򐂰 The migration occurs between storage pools, and the migration progressed beyond the
first extent.
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves a volume that is spanning storage pools, which is not a valid
configuration other than during a migration.
򐂰 The migration is a Migrate to Image Mode (even if it is processing the first extent).
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves the volume in an inconsistent state.
򐂰 A migration is waiting for a metadata checkpoint that failed.



If a migration is stopped and other migrations are queued while waiting to use the MDisk,
those migrations now start. However, if a migration is suspended, the migration continues to
use resources, so another migration is not started.

The SVC attempts to resume the migration if the error log entry is marked as fixed by using
the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The
migration might resume on a node other than the node that started the migration.

6.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MiB. In this
description, this unit is referred to as a chunk.

We describe the algorithm that is used to migrate an extent:


1. Pause (pause means to queue all new I/O requests in the virtualization layer in the SVC
and to wait for all outstanding requests to complete) all I/O on the source MDisk on all
nodes in the SVC system. The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific
chunk that is being migrated. Writes to the extent are mirrored to the source and the
destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
– Synchronously read 256 KB from the source
– Synchronously write 256 KB to the target
4. After the entire chunk is copied to the destination, repeat the process for the next chunk
within the extent.
5. After the entire extent is migrated, pause all I/O to the extent that is being migrated,
perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to
the destination, and stop mirroring writes (writes only to the destination).
6. If the checkpoint fails, the I/O is unpaused.

During the migration, the extent can be divided into the following regions, as shown in
Figure 6-10 on page 253:
򐂰 Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the
virtualization layer that is waiting for the chunk to be copied.
򐂰 Reads to Region A are directed to the destination because this data was copied. Writes to
Region A are written to the source and the destination extent to maintain the integrity of
the source extent.
򐂰 Reads and writes to Region C are directed to the source because this region is not yet
migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack, such as cache
destages, are held back. If the back-end storage is operating with significant latency, this
operation might take time (minutes) to complete, which can have an adverse effect on the
overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is
still active after 1 minute, the migration is paused for 30 seconds. During this time, writes to
the chunk can proceed. After 30 seconds, the migration of the chunk is resumed. This
algorithm is repeated as many times as necessary to complete the migration of the chunk, as
shown in Figure 6-10.

Figure 6-10 Migrating an extent

The SVC ensures read stability during data migrations, even if the data migration is stopped
by a node reset or a system shutdown. This read stability is possible because the SVC
disallows writes on all nodes to the area that is being copied. On a failure, the extent
migration is restarted from the beginning. At the conclusion of the operation, we see the
following results:
򐂰 Extents were migrated in 16 MiB chunks, one chunk at a time.
򐂰 Chunks are either fully copied, being copied (in progress), or not yet copied.
򐂰 When the extent is finished, its new location is saved.

Figure 6-11 shows the data migration and write operation relationship.

Figure 6-11 Migration and write operation relationship



6.4 Migrating data from an image mode volume
This section describes migrating data from an image mode volume to a fully managed
volume. This type of migration is used to take an existing host logical unit number (LUN) and
move it into the virtualization environment as provided by the SVC system.

6.4.1 Image mode volume migration concept


First, we describe the concepts that are associated with this operation.

MDisk modes
The following MDisk modes are available:
򐂰 Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and it has no metadata that is
stored on it. The SVC does not write to an MDisk that is in unmanaged mode except when
it attempts to change the mode of the MDisk to one of the other modes.
򐂰 Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume
with no virtualization. Image mode volumes have a minimum size of one block (512 bytes)
and always occupy at least one extent. An image mode MDisk is associated with exactly
one volume.
򐂰 Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage
pool. Zero or more managed mode volumes might use these extents.

Transitions between the modes


The following state transitions can occur to an MDisk (Figure 6-12 on page 255):
򐂰 Unmanaged mode to managed mode
This transition occurs when an MDisk is added to a storage pool, which makes the MDisk
eligible for the allocation of data and metadata extents.
򐂰 Managed mode to unmanaged mode
This transition occurs when an MDisk is removed from a storage pool.
򐂰 Unmanaged mode to image mode
This transition occurs when an image mode MDisk is created on an MDisk that was
previously unmanaged. It also occurs when an MDisk is used as the target for a migration
to image mode.
򐂰 Image mode to unmanaged mode
This transition can occur in the following ways:
– When an image mode volume is deleted, the MDisk that supported the volume
becomes unmanaged.
– When an image mode volume is migrated in image mode to another MDisk, the MDisk
that is being migrated from remains in image mode until all data is moved off it. It then
transitions to unmanaged mode.

򐂰 Image mode to managed mode
This transition occurs when the image mode volume that is using the MDisk is migrated
into managed mode.
򐂰 Managed mode to image mode is impossible
No operation is available to take an MDisk directly from managed mode to image mode.
You can achieve this transition by performing operations that convert the MDisk to
unmanaged mode and then to image mode.

Figure 6-12 Various states of a volume

Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode volume, the image mode disk first
must be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case.

After this special migration operation occurs, the volume becomes a managed mode volume
and it is treated in the same way as any other managed mode volume. If the image mode disk
does not have a partial last extent, no special processing is performed. The image mode
volume is changed into a managed mode volume and it is treated in the same way as any
other managed mode volume.

After data is migrated off a partial extent, data cannot be migrated back onto the partial
extent.



6.4.2 Migration tips
The following methods are available to migrate an image mode volume to a managed mode
volume:
򐂰 If your image mode volume is in the same storage pool as the MDisks on which you want
to migrate the extents, you can perform one of these migrations:
– Migrate a single extent. You must migrate the last extent of the image mode volume
(number N-1).
– Migrate multiple extents.
– Migrate all of the in-use extents from an MDisk. Migrate extents off an MDisk that is
being deleted.
򐂰 If you have two storage pools (one storage pool for the image mode volume, and one
storage pool for the managed mode volumes), you can migrate a volume from one storage
pool to another storage pool.

A good approach is to have one storage pool for all the image mode volumes and other
storage pools for the managed mode volumes, and then to use the migrate volume facility.

Be sure to verify that enough extents are available in the target storage pool.

6.5 Data migration for Windows by using the SVC GUI


In this section, we move two LUNs from a Microsoft Windows Server 2008 server that is
attached to a DS3400 storage subsystem over to the SVC. The migration examples include
the following scenarios:
򐂰 Moving a Microsoft server’s SAN LUNs from a storage subsystem and virtualizing those
same LUNs through the SVC
Use this method when you are introducing the SVC into your environment. This section
shows that your host downtime is only a few minutes while you remap and remask disks
by using your storage subsystem LUN management tool. For more information about this
step, see 6.5.2, “Adding the SAN Volume Controller between the host system and the DS
3400” on page 259.
򐂰 Migrating your image mode volume to a volume while your host is still running and
servicing your business application
Use this method if you are removing a storage subsystem from your SAN environment, or
if you want to move the data onto LUNs that are more appropriate for the type of data that
is stored on those LUNs, considering availability, performance, and redundancy. For more
information about this step, see 6.5.6, “Migrating the volume from image mode to image
mode” on page 283.
򐂰 Migrating your volume to an image mode volume
Use this method if you are removing the SVC from your SAN environment after a trial
period. For more information about this step, see 6.5.5, “Migrating a volume from
managed mode to image mode” on page 279.
򐂰 Moving an image mode volume to another image mode volume
Use this method to migrate data from one storage subsystem to another storage
subsystem. For more information, see 6.6.6, “Migrating the volumes to image mode
volumes” on page 310.

You can use these methods individually or together to migrate your server’s LUNs from one
storage subsystem to another storage subsystem by using the SVC as your migration tool.

The only downtime that is required for these methods is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.

6.5.1 Windows Server 2008 host system connected directly to the DS 3400
In our example configuration, we use a Windows Server 2008 host and a DS 3400 storage
subsystem. The host has two LUNs (drives X and Y). The two LUNs are part of one DS 3400
array. Before the migration, LUN masking is defined in the DS 3400 to give access to the
Windows Server 2008 host system for the volumes from DS 3400 labeled X and Y
(Figure 6-14 on page 258).

Figure 6-13 shows the starting zoning scenario.

Figure 6-13 Starting zoning scenario

Figure 6-14 on page 258 shows the two LUNs (drive X and drive Y).



Figure 6-14 Drives X and Y

Figure 6-15 shows the properties of one of the DS 3400 disks that uses the Subsystem
Device Driver DSM (SDDDSM). The disk appears as an FAStT Multi-Path Disk Device.

Figure 6-15 Disk properties

6.5.2 Adding the SAN Volume Controller between the host system and the
DS 3400
Figure 6-16 shows the new environment with the SVC and a second storage subsystem that
is attached to the SAN. The second storage subsystem is not required for the migration to
the SVC. However, we show in the following examples that it is possible to move data across
storage subsystems without any host downtime.

Figure 6-16 Add the SVC and second storage subsystem

To add the SVC between the host system and the DS 3400 storage subsystem, complete the
following steps:
1. Check that you installed the supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS 3400. Mask the LUNs to the SVC, and remove the
masking for the host.
Figure 6-17 on page 260 shows the two LUNs (win2008_lun_01 and win2008_lun_02)
with LUN IDs 2 and 3 that are remapped to the SVC Host ITSO_SVC_DH8.



Figure 6-17 LUNs remapped

Important: To avoid potential data loss, back up all the data that is stored on your
external storage before you use the wizard.

5. Log in to your SVC Console and open Pools → System Migration, as shown in
Figure 6-18.

Figure 6-18 Pools and System Migration

6. Click Start New Migration, which starts a wizard, as shown in Figure 6-19.

Figure 6-19 Start New Migration

7. Follow the Storage Migration Wizard, as shown in Figure 6-20 on page 261, and then click
Next.

Figure 6-20 Storage Migration Wizard (Step 1 of 8)



8. Figure 6-21 shows the Prepare Environment for Migration information window. Click Next.

Figure 6-21 Storage Migration Wizard: Preparing the environment for migration (Step 2 of 8)

9. Click Next to complete the storage mapping, as shown in Figure 6-22.

Figure 6-22 Storage Migration Wizard: Mapping storage (Step 3 of 8)

10.Figure 6-23 shows the device discovery. Click Close.

Figure 6-23 Discovering devices



11.Figure 6-24 shows the available MDisks for migration.

Figure 6-24 Storage Migration Wizard (Step 4 of 8)

12.Mark both MDisks (mdisk10 and mdisk12) for migrating, as shown in Figure 6-25, and
then click Next.

Figure 6-25 Storage Migration Wizard: Selecting MDisks for migration

13.Figure 6-26 shows the MDisk import process. During the import process, a storage pool is
automatically created, in our case, MigrationPool_8192. You can see that the commands
that are issued by the wizard create an image mode volume for each MDisk, with a
one-to-one mapping to mdisk10 and mdisk12. Click Close to continue.

Figure 6-26 Storage Migration Wizard: MDisk import process (Step 5 of 8)

14.To create a host object to which we map the volume later, click Add Host, as shown in
Figure 6-27.

Figure 6-27 Storage Migration Wizard: Adding a host



15.Figure 6-28 shows the empty fields that we must complete to match our host
requirements.

Figure 6-28 Storage Migration Wizard: Add Host information fields

16.Enter the host name that you want to use for the host, add the Fibre Channel (FC) port,
and select a host type. In our case, the host name is Win_2008. Click Add Host, as shown
in Figure 6-29 on page 267.

Figure 6-29 Storage Migration Wizard: Completed host information

17.Figure 6-30 shows the progress of creating a host. Click Close.

Figure 6-30 Progress status: Creating a host



18.Figure 6-31 shows that the host was added successfully. Click Next to continue.

Figure 6-31 Storage Migration Wizard: Adding a host was successful

19.Figure 6-32 shows all of the available volumes to map to a host.

Figure 6-32 Storage Migration Wizard: Volumes that are available for mapping (Step 6 of 8)

20.Mark both volumes and click Map to Host, as shown in Figure 6-33.

Figure 6-33 Storage Migration Wizard: Mapping volumes to host

21.Modify the host mapping by choosing a host from the drop-down menu, as shown in
Figure 6-34. Click Next.

Figure 6-34 Storage Migration Wizard: Modify Host Mappings

22.The right side of Figure 6-35 on page 270 shows the volumes that can be marked to map
to your host. Mark both volumes and click Apply.



Figure 6-35 Storage Migration Wizard: Volume mapping to the host

23.Figure 6-36 shows the progress of the volume mapping to the host. Click Close when you
are finished.

Figure 6-36 Modify Mappings: Task completed

24.After the volume-to-host mapping task is completed, the Host Mappings column for the
volumes shows Yes (Figure 6-37 on page 271). Click Next.

Figure 6-37 Storage Migration Wizard: Map Volumes to Hosts

25.Select the storage pool that you want to use for migration, in our case, DS3400_pool1, as
shown in Figure 6-38. Click Next.

Figure 6-38 Storage Migration Wizard: Selecting a storage pool to use for migration (Step 7 of 8)



26.Migration starts automatically by performing a volume copy, as shown in Figure 6-39.

Figure 6-39 Start Migration: Task completed

27.The window that is shown in Figure 6-40 opens. This window states that the migration has
begun. Click Finish.

Figure 6-40 Storage Migration Wizard: Data migration begins (Step 8 of 8)

28.The window that is shown in Figure 6-41 opens automatically to show the progress of the
migration.

Figure 6-41 Progress of the migration process

29.Click Volumes → Volumes by host, as shown in Figure 6-42, to see all the volumes that
are served by the new host for this migration step.

Figure 6-42 Selecting to view volumes by host

30.Figure 6-43 shows all the volumes (copy0* and copy1) that are served by the newly
created host.

Figure 6-43 Volumes that are served by the new host

As you can see in Figure 6-43, the migrated volume is a mirrored volume with one copy on
the image mode pool and another copy in a managed mode storage pool. The administrator
can choose to leave the volume or split the initial copy from the mirror.

6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 host, complete the
following steps:
1. Start the Windows Server 2008 host system again and go to the disk management of the
DS 3400 disks. You can see that the disk properties changed to 2145 Multi-Path Disk
Device, as shown in Figure 6-44 on page 274.



Figure 6-44 Disk Management: See the new disk properties

Figure 6-45 shows the Disk Management window.

Figure 6-45 Migrated disks are available

2. Select Start → Programs → Subsystem Device Driver DSM → Subsystem Device
Driver DSM to open the SDDDSM command-line utility, as shown in Figure 6-46 on
page 275.

Figure 6-46 Subsystem Device Driver DSM CLI

3. Run the datapath query device command to check whether all paths are available as
planned in your SAN environment (Example 6-1).

Example 6-1 The datapath query device command


Windows Server
Copyright (c) 2007 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 8 DEVICE NAME: Disk4 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801FD80284000000000000011 LUN SIZE: 10.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 76 0
1 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 98 0
2 * Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0
3 * Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0
4 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 87 0
5 * Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 72 0
7 * Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0

DEV#: 9 DEVICE NAME: Disk5 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801FD80284000000000000012 LUN SIZE: 20.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 115 0
3 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 88 0
4 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
5 Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 104 0
6 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
7 Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 105 0

C:\Program Files\IBM\SDDDSM>



6.5.4 Adding IBM SAN Volume Controller between the host and the DS 3400 by
using the CLI
In this section, we use only CLI commands to add direct-attached storage to the SVC’s
managed storage. For more information about our preparation of the environment, see 6.5.1,
“Windows Server 2008 host system connected directly to the DS 3400” on page 257.

Verifying the currently used storage pools


Verify the currently used storage pool on the SVC to discover the storage pool’s free capacity,
as shown in Example 6-2.

Example 6-2 Storage pool’s free capacity


IBM_2145:ITSO_SVC2:ITSO_admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_
capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st
atus:compression_active:compression_virtual_capacity:compression_compressed_capaci
ty:compression_uncompressed_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:chi
ld_mdisk_grp_count:child_mdisk_grp_capacity:type:encrypt
0:CompressedV7000:online:3:0:90.00GB:1024:90.00GB:0.00MB:0.00MB:0.00MB:0:80:auto:b
alanced:no:0.00MB:0.00MB:0.00MB:0:CompressedV7000:0:0.00MB:parent:no
1:test_pool_01:online:3:0:381.00GB:1024:381.00GB:0.00MB:0.00MB:0.00MB:0:80:off:ina
ctive:no:0.00MB:0.00MB:0.00MB:1:test_pool_01:0:0.00MB:parent:no
2:MigrationPool_8192:online:2:2:30.00GB:8192:0:30.00GB:30.00GB:30.00GB:100:0:auto:
balanced:no:0.00MB:0.00MB:0.00MB:2:MigrationPool_8192:0:0.00MB:parent:no
3:DS3400_pool1:online:1:1:100.00GB:1024:80.00GB:20.00GB:20.00GB:20.00GB:20:80:auto
:balanced:no:0.00MB:0.00MB:0.00MB:3:DS3400_pool1:0:0.00MB:parent:no
IBM_2145:ITSO_SVC2:ITSO_admin>

Creating a storage pool


When we move the two LUNs to the SVC, we use them initially in image mode. Therefore, we
need a storage pool to hold those disks.

First, we add a new empty storage pool (in our case imagepool) for the import of the LUNs, as
shown in Example 6-3. It is better to have a separate pool in case a problem occurs during
the import. That way, the import process cannot affect the other storage pools.

Example 6-3 Adding a storage pool

IBM_2145:ITSO_SVC2:ITSO_admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -easytier off -ext 256
MDisk Group, id [4], successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>

Verifying the creation of the new storage pool


Now, we verify whether the new storage pool was added correctly, as shown in Example 6-4.

Example 6-4 Verifying the new storage pool

IBM_2145:ITSO_SVC2:ITSO_admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_
capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st
atus:compression_active:compression_virtual_capacity:compression_compressed_capaci

ty:compression_uncompressed_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:chi
ld_mdisk_grp_count:child_mdisk_grp_capacity:type:encrypt
0:CompressedV7000:online:3:0:90.00GB:1024:90.00GB:0.00MB:0.00MB:0.00MB:0:80:auto:b
alanced:no:0.00MB:0.00MB:0.00MB:0:CompressedV7000:0:0.00MB:parent:no
1:test_pool_01:online:3:0:381.00GB:1024:381.00GB:0.00MB:0.00MB:0.00MB:0:80:off:ina
ctive:no:0.00MB:0.00MB:0.00MB:1:test_pool_01:0:0.00MB:parent:no
2:MigrationPool_8192:online:2:2:30.00GB:8192:0:30.00GB:30.00GB:30.00GB:100:0:auto:
balanced:no:0.00MB:0.00MB:0.00MB:2:MigrationPool_8192:0:0.00MB:parent:no
3:DS3400_pool1:online:1:1:100.00GB:1024:80.00GB:20.00GB:20.00GB:20.00GB:20:80:auto
:balanced:no:0.00MB:0.00MB:0.00MB:3:DS3400_pool1:0:0.00MB:parent:no
4:imagepool:online:0:0:0:256:0:0.00MB:0.00MB:0.00MB:0:0:off:inactive:no:0.00MB:0.0
0MB:0.00MB:4:imagepool:0:0.00MB:parent:
IBM_2145:ITSO_SVC2:ITSO_admin>

Creating the image volumes


As shown in Example 6-5, we must create two image volumes (image1 and image2) within our
storage pool imagepool. We need one for each MDisk to import LUNs from the storage
controller to the SVC.

Example 6-5 Creating the image volumes


IBM_2145:ITSO_SVC2:ITSO_admin>svctask mkvdisk -name image1 -iogrp 0 -mdiskgrp
imagepool -vtype image -mdisk mdisk13 -syncrate 80
Virtual Disk, id [0], successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>svctask mkvdisk -name image2 -iogrp 0 -mdiskgrp
imagepool -vtype image -mdisk mdisk14 -syncrate 80
Virtual Disk, id [1], successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>

Verifying the image volumes


Now, we check again whether the volumes are created within the storage pool imagepool, as
shown in Example 6-6.

Example 6-6 Verifying the image volumes


IBM_2145:ITSO_SVC2:ITSO_admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change compressed_copy_count
parent_mdisk_grp_id parent_mdisk_grp_name
0 image1 0 io_grp0 online 4 imagepool 10.00GB
image 6005076801FD80284000000000000015 0 1
not_empty 0 no 0 4
imagepool
1 image2 0 io_grp0 online 4 imagepool 20.00GB
image 6005076801FD80284000000000000016 0 1
not_empty 0 no 0 4
imagepool
IBM_2145:ITSO_SVC2:ITSO_admin>

Creating the host


We check whether our host exists or whether we need to create it, as shown in Example 6-7
on page 278. In our case, the host is already defined.



Example 6-7 Listing the host
IBM_2145:ITSO_SVC2:ITSO_admin>svcinfo lshost
id name port_count iogrp_count status
0 Win_2008 2 4 online
IBM_2145:ITSO_SVC2:ITSO_admin>

Mapping the image volumes to the host


Next, we map the image volumes to host Win_2008, as shown in Example 6-8. This mapping
is also known as LUN masking.

Example 6-8 Mapping the volumes


IBM_2145:ITSO_SVC2:ITSO_admin>svctask mkvdiskhostmap -force -host Win_2008 image2
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>svctask mkvdiskhostmap -force -host Win_2008 image1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>

Adding the image volumes to a storage pool


Add the image volumes to storage pool DS3400_pool1 (as shown in Example 6-9) to have
them mapped as fully allocated volumes that are managed by the SVC.

Example 6-9 Adding the volumes to the storage pool


IBM_2145:ITSO_SVC2:ITSO_admin>svctask addvdiskcopy -mdiskgrp DS3400_pool1 image1
Vdisk [0] copy [1] successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>svctask addvdiskcopy -mdiskgrp DS3400_pool1 image2
Vdisk [1] copy [1] successfully created
IBM_2145:ITSO_SVC2:ITSO_admin>

Checking the status of the volumes


Both volumes now have a second copy, which is shown as type many in Example 6-10. Both
volumes are available to be used by the host.

Example 6-10 Status check


IBM_2145:ITSO_SVC2:ITSO_admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change compressed_copy_count
parent_mdisk_grp_id parent_mdisk_grp_name
0 image1 0 io_grp0 online many many 10.00GB
many 6005076801FD80284000000000000015 0 2
not_empty 0 no 0 many
many
1 image2 0 io_grp0 online many many 20.00GB
many 6005076801FD80284000000000000016 0 2
not_empty 0 no 0 many
many
IBM_2145:ITSO_SVC2:ITSO_admin>

6.5.5 Migrating a volume from managed mode to image mode
Complete the following steps to migrate a managed volume to an image mode volume:
1. Create an empty storage pool for each volume that you want to migrate to image mode.
These storage pools host the target MDisk that you map later to your server at the end of
the migration.
2. Click Pools → MDisks by Pools to create a pool from the drop-down menu, as shown in
Figure 6-47.

Figure 6-47 Selecting Pools to create a pool

3. To create an empty storage pool for migration, complete the following steps:
a. You are prompted for the pool name, extent size, and warning threshold, as shown in
Figure 6-48. After you enter the information, click Next.

Figure 6-48 Creating a storage pool

b. You are then prompted to optionally select the MDisk to include in the storage pool, as
shown in Figure 6-49 on page 280. Click Create.



Figure 6-49 Selecting the MDisk

4. As shown in Figure 6-50, you are reminded that an empty storage pool was created. Click
Yes.

Figure 6-50 Reminder

5. Figure 6-51 on page 281 shows the progress status as the system creates a storage pool
for migration. Click Close to continue.

Figure 6-51 Progress status

6. From the Create Volumes panel, select the volume that you want to migrate to image
mode and select Export to Image Mode from the drop-down menu, as shown in
Figure 6-52.

Figure 6-52 Selecting the volume

7. Select the MDisk onto which you want to migrate the volume, as shown in Figure 6-53 on
page 282. Click Next.



Figure 6-53 Migrate to image mode

8. Select a storage pool into which the image mode volume is placed after the migration
completes, in our case, the For Migration storage pool. Click Finish, as shown in
Figure 6-54.

Figure 6-54 Select storage pool

9. The volume is exported to image mode and placed in the For Migration storage pool, as
shown in Figure 6-55. Click Close.

Figure 6-55 Export Volume to image process

10.Browse to Pools → MDisk by Pools. Click the plus sign (+) (expand icon) to the left of the
name. Now, mdisk12 is an image mode MDisk, as shown in Figure 6-56.

Figure 6-56 MDisk is in image mode

11.Repeat these steps for every volume that you want to migrate to an image mode volume.
12.Delete the image mode data from the SVC by using the procedure that is described in
6.5.7, “Removing image mode data from the IBM SAN Volume Controller” on page 291.

6.5.6 Migrating the volume from image mode to image mode


Use the process that is described in this section to move image mode volumes from one
storage subsystem to another storage subsystem without going through the SVC fully
managed mode. The data stays available for the applications during this migration. This
procedure is nearly the same as the procedure that is described in 6.5.5, “Migrating a volume
from managed mode to image mode” on page 279.



In our example, we migrate the Windows Server W2K8 volume to another disk subsystem as
an image mode volume. The second storage subsystem is a V7000_Gen2; a new LUN is
configured on the storage and mapped to the SVC system. The LUN is available to the SVC
as an unmanaged mdisk15, as shown in Figure 6-57.

Figure 6-57 Unmanaged disk on a storage subsystem

To migrate the image mode volume to another image mode volume, complete the following
steps:
1. Mark the unmanaged mdisk15 and click Actions or right-click and select Import from the
list, as shown in Figure 6-58.

Figure 6-58 Import the unmanaged MDisk into the SVC

2. The Import Wizard window opens, which describes the process of importing the MDisk
and mapping an image mode volume to it, as shown in Figure 6-59. Enable the caching
and click Next.

Figure 6-59 Import Wizard (Step1 of 2)

3. Select a temporary pool because you do not want to migrate the volume into an SVC
managed volume pool. Select the extent size from the drop-down menu and click Finish,
as shown in Figure 6-60 on page 285.

Figure 6-60 Import Wizard (Step 2 of 2)

4. The import process starts (as shown in Figure 6-61) by creating a temporary storage pool
MigrationPool_1024 (1 GiB) and an image volume. Click Close to continue.

Figure 6-61 Import of MDisk and creation of temporary storage pool MigrationPool_1024

5. As shown in Figure 6-62, an image mode mdisk15 now shows with the import controller
name and SCSI ID as its name.

Figure 6-62 Imported mdisk15 within the created storage pool

6. Create a storage pool Migration_Out with the same extent size (1 GiB) as the
automatically created storage pool MigrationPool_1024 for transferring the image mode
disk. Go to Pools → MDisks by Pools, as shown in Figure 6-63 on page 286.



Figure 6-63 MDisk by Pools

7. Click Create Pool to create an empty storage pool and give your new storage pool the
meaningful name Migration_Out. Click the Advanced Settings drop-down menu and
choose 1.00 GiB as the extent size for your new storage pool, as shown in Figure 6-64.
Click Next to continue.

Figure 6-64 Creating an empty storage pool with a 1 GiB extent size (Step 1 of 2)

8. Figure 6-65 on page 287 shows a storage pool window with several MDisks. Without
selecting an MDisk, click Create to continue to create an empty storage pool.

Figure 6-65 Creating an empty storage pool (Step 2 of 2)

9. The warning that is shown in Figure 6-66 reminds you that an empty storage pool is
created. Click Yes to continue.

Figure 6-66 Warning message: Creating an empty storage pool

10.Figure 6-67 on page 288 shows the progress of creating the storage pool Migration_Out.
Click Close to continue.



Figure 6-67 Progress of storage pool creation

11.Now, the empty storage pool for the image to image migration is created. Go to Pools →
MDisks by Pools, as shown Figure 6-68.

Figure 6-68 Empty storage pool Migration_Out

12.Go to Volumes → Volumes by Pool, as shown in Figure 6-69.

Figure 6-69 Volumes by Pool

13.In the left pane, select the storage pool of the imported disk, which is called
MigrationPool_1024. Then, mark the image disk that you want to migrate out and select
Actions. From the drop-down menu, select Export to Image Mode, as shown in
Figure 6-70.

Figure 6-70 Export to Image Mode

14.Select the target MDisk mdisk13 on the new disk controller to which you want to migrate.
Click Next, as shown in Figure 6-71.

Figure 6-71 Selecting the target MDisk

15.Select the target Migration_Out (empty) storage pool, as shown in Figure 6-72 on
page 290. Click Finish.



Figure 6-72 Selecting the target storage pool

16.Figure 6-73 shows the progress status of the Export Volume to Image process. Click
Close to continue.

Figure 6-73 Export Volume to Image progress status

17.Figure 6-74 on page 291 shows that the MDisk location changed as expected to the new
storage pool Migration_Out.

Figure 6-74 Image disk migrated to new storage pool

18.Repeat these steps for all image mode volumes that you want to migrate.
19.If you want to delete the data from the SVC, use the procedure that is described in 6.5.7,
“Removing image mode data from the IBM SAN Volume Controller” on page 291.
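The same image-to-image movement can also be driven from the CLI with the svctask migratetoimage command, which is used later in this chapter. A minimal sketch, run from a management workstation over SSH and reusing the target MDisk and pool names of this example (the cluster address, user ID, and volume name are placeholders):

   # Move the image mode volume onto the target MDisk in the Migration_Out pool
   ssh admin@svc_cluster "svctask migratetoimage -vdisk <volume_name> -mdisk mdisk13 -mdiskgrp Migration_Out"
   # Check the migration until it no longer appears in the list
   ssh admin@svc_cluster "svcinfo lsmigrate"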

6.5.7 Removing image mode data from the IBM SAN Volume Controller
If your data is in an image mode volume inside the SVC, you can remove the volume from the
SVC, which allows you to free the original LUN for reuse. The following sections describe how
to migrate data to an image mode volume. Depending on your environment, you might need
to complete the following procedures before you delete the image volume:
- 6.5.5, “Migrating a volume from managed mode to image mode” on page 279
- 6.5.6, “Migrating the volume from image mode to image mode” on page 283

To remove the image mode volume from the SVC, we delete the volume (the svctask rmvdisk command on the CLI).

If the command succeeds on an image mode volume, the underlying back-end storage
controller is consistent with the data that a host might previously read from the image mode
volume. That is, all fast write data was flushed to the underlying LUN. Deleting an image
mode volume causes the MDisk that is associated with the volume to be ejected from the
storage pool. The mode of the MDisk is returned to unmanaged.

Image mode volumes only: This situation applies to image mode volumes only. If you
delete a normal volume, all of the data is also deleted.
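Before you delete an image mode volume from the CLI, you can confirm that no fast write data is outstanding and then remove it. A minimal sketch, assuming SSH access to the system and a placeholder volume name:

   # The volume is safe to remove when fast_write_state reports empty
   ssh admin@svc_cluster "svcinfo lsvdisk <volume_name>" | grep fast_write_state
   # Delete the image mode volume; its MDisk returns to unmanaged mode
   ssh admin@svc_cluster "svctask rmvdisk <volume_name>"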

As shown in Example 6-1 on page 275, the SAN disks are on the SVC.

Check that you installed the supported device drivers on your host system.

To switch back to the storage subsystem, complete the following steps:


1. Shut down your host system.
2. Open the Volumes by Host window to see which volumes are mapped to your host, as
shown in Figure 6-75 on page 292.



Figure 6-75 Volume by host mapping

3. Check your Host and select your volume. Then, right-click and select Unmap all Hosts,
as shown in Figure 6-76.

Figure 6-76 Unmap the volume from the host

4. Verify your unmap process, as shown in Figure 6-77, and click Unmap.

Figure 6-77 Verifying your unmapping process

5. Repeat steps 3 and 4 for every image mode volume that you want to remove from the SVC.

6. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN
masking, and add the host to the masking.
7. Power on your host system.

6.5.8 Mapping the free disks onto the Windows Server 2008 server
To detect and map the disks that were freed from SVC management, go to Windows Server
2008 and complete the following steps:
1. Using your DS3400 Storage Manager interface, remap the two LUNs that were previously MDisks back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-78 shows that the LUNs are now back to an
IBM 1726-4xx FAStT Multi-Path Disk Device type.

Figure 6-78 IBM 1726-4xx FAStT Multi-Path Disk Device type

3. Open your Disk Management window; the disks appear, as shown in Figure 6-79 on page 294. You might need to reactivate each disk by right-clicking it.



Figure 6-79 Windows Server 2008 Disk Management

6.6 Migrating Linux SAN disks to SVC disks


In this section, we move the two LUNs from a Linux server that is booting directly off our
DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC and
move them between other managed disks. Finally, we move them back to image mode disks
so that those LUNs can be masked and mapped back to the Linux server directly.

This example can help you to perform any of the following tasks in your environment:
- Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC.
Perform this task first when you are introducing the SVC into your environment. This
section shows that your host downtime is only a few minutes while you remap and remask
disks by using your storage subsystem LUN management tool. For more information
about this task, see 6.6.2, “Preparing your IBM SAN Volume Controller to virtualize disks”
on page 297.
- Move data between storage subsystems while your Linux server is still running and
servicing your business application.
Perform this task if you are removing a storage subsystem from your SAN environment.
You also can perform this task if you want to move the data onto LUNs that are more
appropriate for the type of data that is stored on those LUNs, taking availability,
performance, and redundancy into consideration. For more information about this task,
see 6.6.4, “Migrating the image mode volumes to managed MDisks” on page 304.
- Move your Linux server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the Linux server.
For more information about this step, see 6.6.5, “Preparing to migrate from the IBM SAN
Volume Controller” on page 307.

You can use these three activities individually or together to migrate your Linux server’s LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. You can also use a subset of these tasks simply to introduce the SVC into your environment or to remove it from your environment.

The only downtime that is required for these tasks is the time that it takes to remask and
remap the LUNs between the storage subsystems and your SVC.

Our Linux environment is shown in Figure 6-80.

Figure 6-80 Linux SAN environment

Figure 6-80 shows our Linux server that is connected to our SAN infrastructure. The following
LUNs are masked directly to our Linux server from our storage subsystem:
- The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1). This LUN is used to boot the system directly from the storage subsystem. The
operating system identifies this LUN as /dev/mapper/VolGroup00-LogVol00.

SCSI LUN ID 0: To successfully boot a host off the SAN, you must assign the LUN as
SCSI LUN ID 0.

Linux sees this LUN as our /dev/sda disk.


- We also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB and mounted in the
/data folder on the /dev/dm-2 disk.

Example 6-11 on page 296 shows our disks that attach directly to the Linux hosts.



Example 6-11 Directly attached disks
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1971344 7601400 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-80 on page 295. The
Linux server has the following configuration:
- The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green Zone with our storage subsystem.
- The two LUNs that were defined on the storage subsystem by using LUN masking are directly available to our Linux server.

6.6.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the steps to introduce the SVC into your SAN environment. Although
this section summarizes these activities only, you can introduce the SVC into your SAN
environment without any downtime to any host or application that also uses your SAN.

If an SVC is already connected, skip to 6.6.2, “Preparing your IBM SAN Volume Controller to
virtualize disks” on page 297.

Complete the following steps to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply units, and
redundant ac-power switches). Cable the SVC correctly, power on the SVC, and verify that
the SVC is visible on your SAN. For more information, see Chapter 3, “Planning and
configuration” on page 73.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone, which is our Black Zone that is described in Figure 6-81 on
page 297
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)

Figure 6-81 shows our environment.

Figure 6-81 SAN environment with an attached SVC
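After the zones are activated, a quick CLI check confirms that the SVC nodes are online and that the back-end storage subsystem is visible as a controller. A sketch, assuming SSH access to an already configured system (the cluster address and user ID are placeholders):

   # Both nodes of the I/O group should report a status of online
   ssh admin@svc_cluster "svcinfo lsnode"
   # The back-end storage subsystem should be listed as a controller
   ssh admin@svc_cluster "svcinfo lscontroller"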

6.6.2 Preparing your IBM SAN Volume Controller to virtualize disks


This section describes the preparation tasks that we performed before our Linux server was
taken offline. These activities are all nondisruptive. They do not affect your SAN fabric or your
existing SVC configuration (if you have a production SVC).

Creating a storage pool


When we move the two Linux LUNs to the SVC, we use them initially in image mode.
Therefore, we need a storage pool to hold those disks.

We must create an empty storage pool for each of the disks by using the commands that are
shown in Example 6-12.

Example 6-12 Create an empty storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512
MDisk Group, id [2], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name Palau_Pool2 -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
2 Palau_Pool1 online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive



3 Palau_Pool2 online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive
IBM_2145:ITSO-CLS1:ITSO_admin>

Creating your host definition


If you prepared your zones correctly, the SVC can see the Linux server’s HBAs on the fabric.
(Our host had only one HBA.)

The use of the svcinfo lshbaportcandidate command on the SVC lists all of the worldwide
names (WWNs), which are not yet allocated to a host, that the SVC can see on the SAN
fabric. Example 6-13 shows the output of the nodes that it found on our SAN fabric. (If the
port did not show up, a zone configuration problem exists.)

Example 6-13 Display HBA port candidates

IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:ITSO_admin>

If you do not know the WWN of your Linux server, you can review which WWNs are currently
configured on your storage subsystem for this host. Figure 6-82 shows our configured ports
on an IBM DS4700 storage subsystem.

Figure 6-82 Display port WWNs

After it is verified that the SVC can see our host (Palau), we create the host entry and assign
the WWN to this entry. Example 6-14 shows these commands.

Example 6-14 Create the host entry


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkhost -name Palau -hbawwpn
210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:ITSO_admin>

Verifying that we can see our storage subsystem


If we set up our zoning correctly, the SVC can see the storage subsystem by using the
svcinfo lscontroller command, as shown in Example 6-15.

Example 6-15 Discover the storage controller


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:ITSO_admin>

You can rename the storage subsystem to a more meaningful name by using the svctask
chcontroller -name command. If you have multiple storage subsystems that connect to your
SAN fabric, renaming the storage subsystems makes it considerably easier to identify them.
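For example, a rename might look like the following sketch, where the controller ID and the new name are illustrative only:

   # Give controller ID 1 (the DS4700 in this example) a more descriptive name
   ssh admin@svc_cluster "svctask chcontroller -name ITSO-DS4700 1"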

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if the SVC sees many available, unmanaged MDisks), we get the LUN serial
numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in Figure 6-83 on page 300 (which shows the disk serial number SAN_Boot_palau)
and Figure 6-84 on page 300.



Figure 6-83 Obtaining the disk serial number: SAN_Boot_palau

Figure 6-84 shows the disk serial number Palau_data.

Figure 6-84 Obtaining the disk serial number: Palau_data

Before we move the LUNs to the SVC, we must configure the host multipath configuration for the SVC. Edit the multipath.conf file and restart the multipathd service, as shown in Example 6-16, adding the device entry that is shown in Example 6-17 to the file.

Example 6-16 Edit the multipath.conf file


[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]
[root@Palau ~]#

Example 6-17 Data to add to the multipath.conf file

# SVC
device {
vendor "IBM"
product "2145DH8"
path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.
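After the SVC volumes are presented to the host later in this procedure, you can confirm that the new device section is in effect with the standard multipath tools. A minimal check on the Linux host (output details vary by distribution):

   # Restart the multipath daemon so that the new device section is read
   service multipathd restart
   # List the resulting multipath maps; the SVC-backed devices are listed with the IBM vendor ID
   multipath -ll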

6.6.3 Moving the LUNs to the IBM SAN Volume Controller


In this step, we move the LUNs that are assigned to the Linux server and reassign them to the
SVC. Our Linux server has two LUNs: one LUN is for our boot disk and operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.

If we wanted to move only the LUN that holds our application and data files, we would not have to reboot the host. The only requirement would be to unmount the file system and vary off the volume group (VG) to ensure data integrity during the reassignment.

Because we intend to move both LUNs at the same time, we must complete the following
steps:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps instead:
a. Stop the applications that use the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are a Logical Volume Manager (LVM) volume, deactivate that VG by
using the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver by using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel. (We reload this module
and rediscover the disks later.) It is also possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver; a sketch of that approach is included at the end of this procedure.



3. By using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the Linux server and remap and remask the disks to the SVC.

LUN IDs: Although we are using boot from SAN, you can map the boot disk to the SVC with any LUN ID. The LUN ID does not have to be 0 until later, when we configure the volume-to-host mapping in the SVC.

4. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-18 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.

Example 6-18 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 mdisk26 online unmanaged 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 mdisk27 online unmanaged 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>

Important: Match your discovered MDisk serial numbers (unique identifier (UID) on the
svcinfo lsmdisk task display) with the serial number that you recorded earlier (in
Figure 6-83 on page 300 and Figure 6-84 on page 300).

5. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-19.

Example 6-19 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
26 md_palauS online unmanaged 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online unmanaged 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>

6. We create our image mode volumes by using the svctask mkvdisk command and the
-vtype image option, as shown in Example 6-20 on page 303. This command virtualizes
the disks in the same layout as though they were not virtualized.

Example 6-20 Create the image mode volumes
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0
-vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0
-vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully create

IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>

IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_wri te_state se_copy_count
29 palau_SANB 0 io_grp0 online 4
Palau_Pool1 12.0GB image
60050768018301BF280000000000002B 0 1 empty
0
30 palau_Data 0 io_grp0 online 4
Palau_Pool2 5.0GB image
60050768018301BF280000000000002C 0 1 empty
0

7. Map the new image mode volumes to the host, as shown in Example 6-21.

Important: Ensure that you map the boot volume with SCSI ID 0 to your host. The host
must identify the boot volume during the boot process.

Example 6-21 Map the volumes to the host


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdiskhostmap -host Palau -scsi 0
palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdiskhostmap -host Palau -scsi 1
palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshostvdiskmap Palau
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
0 Palau 0 29 palau_SANB
210000E08B89C1CD 60050768018301BF280000000000002B



0 Palau 1 30 palau_Data
210000E08B89C1CD 60050768018301BF280000000000002C
IBM_2145:ITSO-CLS1:ITSO_admin>

FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.

8. Power on your host server and enter your FC HBA BIOS before booting the operating
system. Ensure that you change the boot configuration so that it points to the SVC.
Complete the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the IBM SAN Volume Controller
2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you moved only the application LUN to the SVC and left your Linux server running, you
must complete only these steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can run commands to the kernel to rescan the
SCSI bus to see the new volumes. (These details are beyond the scope of this book.)
b. Check your syslog to verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, rediscover the VG and then run the
vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-22. The df output shows us that all of the disks are available again.

Example 6-22 Mount data disk


[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938056 7634688 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#

11.You are now ready to start your application.
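Steps 2d and 9a mention that the Linux SCSI subsystem can rescan for new disks without unloading the HBA driver. One common approach is a sysfs-triggered rescan, sketched here (host adapter numbers vary by system):

   # Trigger a rescan on every SCSI host adapter
   for host in /sys/class/scsi_host/host*; do
       echo "- - -" > "$host/scan"
   done
   # Review the kernel log to confirm that the new LUNs were detected
   tail /var/log/messages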

6.6.4 Migrating the image mode volumes to managed MDisks


While the Linux server is still running and our file systems are in use, we migrate the image mode volumes onto striped volumes, spreading the extents over three other MDisks. In our example, the three new LUNs are on a DS4500 storage subsystem, so the data also moves to another storage subsystem at the same time.

Preparing MDisks for striped mode volumes
From our second storage subsystem, we performed the following tasks:
- Created and allocated three new LUNs to the SVC
- Discovered them as MDisks
- Renamed these LUNs to more meaningful names
- Created a storage pool
- Placed all of these MDisks into this storage pool

The output of our commands is shown in Example 6-23.

Example 6-23 Create a storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 mdisk28 online unmanaged 8.0GB
0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 mdisk29 online unmanaged 8.0GB
0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 mdisk30 online unmanaged 8.0GB
0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd

IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name palau-md1 mdisk28


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 palau-md1 online unmanaged 8 MD_palauVD 8.0GB
0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd



29 palau-md2 online unmanaged 8 MD_palauVD 8.0GB
0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 palau-md3 online unmanaged 8 MD_palauVD 8.0GB
0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the
MD_palauVD storage pool by using the svctask migratevdisk command, as shown in
Example 6-24.

Example 6-24 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp
MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp
MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>

While the migration is running, our Linux server is still running.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-24. Listing the storage pool by using the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pools is slowly increasing while those extents
are moved to the new storage pool.
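If you prefer to poll the migration from a script instead of rerunning the command manually, a simple loop over svcinfo lsmigrate is enough. A sketch, with an arbitrary polling interval and placeholder cluster address:

   # Poll until no migrations remain in the list
   while ssh admin@svc_cluster "svcinfo lsmigrate" | grep -q progress; do
       sleep 60
   done
   echo "Migration complete"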

After this task completes, the volumes are now spread over three MDisks, as shown in
Example 6-25.

Example 6-25 Migration complete


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB

used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:ITSO_admin>

Our migration to striped volumes on another storage subsystem (DS4500) is now complete. The original image mode MDisks (md_palauS and md_palauD) can now be removed from the SVC, and their LUNs can be removed from the storage subsystem.

If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove the DS4700 storage subsystem from our SAN fabric.
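A sketch of that cleanup from the CLI, reusing the pool and MDisk names of this example (confirm with svcinfo lsmdisk that the MDisks no longer hold volume extents first):

   # Remove the freed MDisks from their pools; they return to unmanaged mode
   ssh admin@svc_cluster "svctask rmmdisk -mdisk md_palauS Palau_Pool1"
   ssh admin@svc_cluster "svctask rmmdisk -mdisk md_palauD Palau_Pool2"
   # Delete the now-empty pools
   ssh admin@svc_cluster "svctask rmmdiskgrp Palau_Pool1"
   ssh admin@svc_cluster "svctask rmmdiskgrp Palau_Pool2"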

6.6.5 Preparing to migrate from the IBM SAN Volume Controller


Before we move the Linux server’s LUNs from being accessed by the SVC as volumes to
being directly accessed from the storage subsystem, we must convert the volumes into image
mode volumes.

You might want to perform this task for any one of the following reasons:
- You purchased a new storage subsystem and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host, which is connected to the SVC, and its data to a site where no SVC exists.
- Changes to your environment no longer require this host to use the SVC.

We can perform other preparation tasks before we must shut down the host and reconfigure
the LUN masking and mapping. We describe these tasks in this section.

If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-85 on
page 308.



Figure 6-85 Environment with the SVC

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. You must
add the new storage subsystem to the Red Zone so that the SVC can communicate with it
directly.

We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it is removed from the SVC.

It is assumed that you created the necessary zones, and after your zone configuration is set
up correctly, the SVC sees the new storage subsystem controller by using the svcinfo
lscontroller command, as shown in Example 6-26.

Example 6-26 Check controller name


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 controller0 IBM 1814
FAStT
IBM_2145:ITSO-CLS1:ITSO_admin>

It is also a good idea to rename the new storage subsystem’s controller to a more useful
name, which can be done by using the svctask chcontroller -name command, as shown in
Example 6-27.

Example 6-27 Rename controller


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chcontroller -name ITSO-4700 0
IBM_2145:ITSO-CLS1:ITSO_admin>

Also, verify that the controller name was changed as you wanted, as shown in Example 6-28.

Example 6-28 Recheck controller name


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 ITSO-4700 IBM 1814
FAStT
IBM_2145:ITSO-CLS1:ITSO_admin>

Creating LUNs
We created two LUNs and masked them on our storage subsystem so that the SVC can see them. Eventually, we give these two LUNs directly to the host and remove the volumes that the host currently uses. To check that the SVC can see these two LUNs, run the svctask detectmdisk command, as shown in Example 6-29.

Example 6-29 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online managed
600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8
MD_palauVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8
MD_palauVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8
MD_palauVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

Even though the MDisks do not stay in the SVC for long, we suggest that you rename them to
more meaningful names so that they are not confused with other MDisks that are used by
other activities.



Also, we create the storage pools to hold our new MDisks, as shown in Example 6-30.

Example 6-30 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning easy_tier easy_tier_status
8 MD_palauVD online 3 2
24.0GB 512 7.0GB 17.00GB 17.00GB
17.00GB 70 0 auto inactive
9 MDG_Palauivd online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0 auto inactive
IBM_2145:ITSO-CLS1:ITSO_admin>

Our SVC environment is now ready for the volume migration to image mode volumes.

6.6.6 Migrating the volumes to image mode volumes


While our Linux server is still running, we migrate the managed volumes onto the new MDisks
by using image mode volumes. Use the svctask migratetoimage command to perform this
task, as shown in Example 6-31.

Example 6-31 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk palau_SANB -mdisk
mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk palau_Data -mdisk
mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
28 palau-md1 online managed 8
MD_palauVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8
MD_palauVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8
MD_palauVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8
MD_palauVD 6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8
MD_palauVD 12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmigrate

migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>

During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.

After the migration completes, the image mode volumes are ready to be removed from the
Linux server. Also, the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.

6.6.7 Removing the LUNs from the IBM SAN Volume Controller
The next step requires downtime on the Linux server because we remap and remask the
disks so that the host sees them directly through the Green Zone, as shown in Figure 6-85 on
page 308.

Our Linux server has two LUNs: one LUN is our boot disk and holds operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.

If we want to move only the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the VG to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need another entry in
the multipath.conf file. Check with the storage subsystem vendor to identify any content
that you must add to the file. You might be able to install and modify the file in advance.

Complete the following steps to move both LUNs at the same time:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.



c. If the file systems are an LVM volume, deactivate that VG by using the vgchange -a n
VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver by using the rmmod DRIVER_MODULE command. This
command removes the SCSI definitions from the kernel. (We reload this module and
rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan for
new disks without requiring you to unload the HBA driver; however, we do not provide
those details here.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command
(Example 6-32). To confirm that you removed the volumes, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the
Linux server.

Example 6-32 Remove the volumes from the host


IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:ITSO_admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step deletes the volumes and returns the underlying MDisks to unmanaged mode, as shown in Example 6-33.

Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that no outstanding dirty cached data exists for the volume that is being removed. If
cached data is still uncommitted, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data needs to be destaged and how busy the
I/O subsystem is determine how long this command takes to complete.

You can check whether the volume has uncommitted data in the cache by using the
command svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute.
This attribute has the following meanings:
- empty: No modified data exists in the cache.
- not_empty: Modified data might exist in the cache.
- corrupt: Modified data might have existed in the cache, but it was lost.

Example 6-33 Remove the volumes from the SVC


IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
31 mdpalau_ivd1 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000

32 mdpalau_ivd online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the Linux server.

Important: If one of the disks is used to boot your Linux server, you must ensure that
the disk is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds
that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before you boot the OS. Ensure that you change the boot configuration so that it points back to the storage subsystem. In our example, we performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS
b. Opened Configuration Settings
c. Opened Selectable Boot Settings
d. Changed the entry from the SVC to the storage subsystem LUN with SCSI ID 0
e. Exited the menu and saved the changes

Important: This step is the last step that you can perform and still safely back out from
the changes so far.

Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Re-create the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.
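As a reference, the back-out actions in the preceding list map to commands that are already used in this chapter. A sketch with this example's object names (adjust them to your system):

   # Rediscover the remapped LUNs as MDisks
   ssh admin@svc_cluster "svctask detectmdisk"
   # Re-create the image mode volume on the rediscovered MDisk
   ssh admin@svc_cluster "svctask mkvdisk -mdiskgrp MD_palauVD -iogrp 0 -vtype image -mdisk mdpalau_ivd -name palau_SANB"
   # Map the volume back to the host, keeping SCSI ID 0 for the boot volume
   ssh admin@svc_cluster "svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB"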

7. Restart the Linux server.


If all of the zoning, LUN masking, and mapping were successful, the Linux server boots as
though nothing happened.
However, if you moved only the application LUN back to the storage subsystem and left your Linux server running, you must complete the following steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command.
If you did not (and cannot) unload your HBA driver, you can run commands to the
kernel to rescan the SCSI bus to see the new volumes. (Details for this step are beyond
the scope of this book.)
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to
rediscover the VG. Then, run the vgchange -a y VOLUME_GROUP command to activate
the VG.



8. Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-34. The df output shows that all of the disks are available again.

Example 6-34 File system after migration


[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938124 7634620 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau ~]#

You are ready to start your application.


9. To ensure that the MDisks are removed from the SVC, run the svctask detectmdisk
command. The MDisks first are discovered as offline. Then, they are automatically
removed when the SVC determines that no volumes are associated with these MDisks.

6.7 Migrating ESX SAN disks to SVC disks


In this section, we describe how to move the two LUNs from our VMware ESX server to the
SVC. The ESX operating system is installed locally on the host, but the two SAN disks are
connected and the VMs are stored there.

We then manage those LUNs with the SVC, move them between other managed disks, and
finally move them back to image mode disks so that those LUNs can then be masked and
mapped back to the VMware ESX server directly.

This example can help you perform any one of the following tasks in your environment:
- Move your ESX server’s data LUNs (that are your VMware VMFS file systems where you
might have your VMs stored), which are directly accessed from a storage subsystem, to
virtualized disks under the control of the SVC.
- Move LUNs between storage subsystems while your VMware VMs are still running.
You can perform this task to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. For more information, see 6.7.4, “Migrating the image mode volumes” on
page 323.
- Move your VMware ESX server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the server.
This task starts in 6.7.5, “Preparing to migrate from the IBM SAN Volume Controller” on
page 326.

You can use these tasks individually or together to migrate your VMware ESX server’s LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. You can also use a subset of these tasks simply to introduce the SVC into your environment or to move data between your storage subsystems.

The only downtime that is required for these tasks is the time that it takes you to remask and
remap the LUNs between the storage subsystems and your SVC.

Our starting SAN environment is shown in Figure 6-86.

Figure 6-86 ESX environment before migration

Figure 6-86 shows our ESX server that is connected to the SAN infrastructure. Two LUNs are
masked directly to it from our storage subsystem.

Our ESX server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-86.

The ESX server’s HBA cards are zoned so that they are in the Green Zone with our storage
subsystem.

The two LUNs that were defined on the storage subsystem and that use LUN masking are
directly available to our ESX server.

6.7.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the process that is used to introduce the SVC into your SAN
environment. Although we summarize only the steps that are in the process, you can
introduce the SVC into your SAN environment without any downtime to any host or
application that also uses your SAN.

If an SVC is already connected, skip to the instructions that are given in 6.7.2, “Preparing your
IBM SAN Volume Controller to virtualize disks” on page 316.



Important: Be careful when you are connecting the SVC to your SAN because this task
requires you to connect cables to your SAN switches and to alter your switch zone
configuration. Performing these tasks incorrectly can render your SAN inoperable, so
ensure that you fully understand the effect of your actions.

Complete the following steps to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and redundant
ac-power switches). Cable the SVC correctly and power on the SVC. Verify that the SVC is
visible on your SAN.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone (our Black Zone, as shown in Figure 6-87)
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)

Figure 6-87 shows our example environment.

Figure 6-87 SAN environment with the SVC attached

6.7.2 Preparing your IBM SAN Volume Controller to virtualize disks


This section describes the preparatory tasks that we perform before taking our ESX server or
VMs offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or
your existing SVC configuration (if you have a production SVC in place).

Creating a storage pool
When we move the two ESX LUNs to the SVC, they first are used in image mode; therefore,
we need a storage pool to hold those disks.

We create an empty storage pool for these disks by using the command that is shown in
Example 6-35. Our MDG_Nile_VM storage pool holds the boot LUN and our data LUN.

Example 6-35 Creating an empty storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Creating the host definition


If you prepared the zones correctly, the SVC can see the ESX server’s HBAs on the fabric.
(Our host had only one HBA.)

First, we get the WWN for our ESX server’s HBA because many hosts are connected to our
SAN fabric and in the Blue Zone. We want to ensure that we have the correct WWN to reduce
our ESX server’s downtime.

Log in to your VMware Management Console as root, browse to Configuration, and select
Storage Adapters. The storage adapters are shown on the right side of the window that is
shown in Figure 6-88. This window displays all of the necessary information. Figure 6-88
shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.

Figure 6-88 Obtain your WWN by using the VMware Management Console

Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs that are
not yet allocated to a host and that the SVC can see on the SAN fabric. Example 6-36 on
page 318 shows the output of the host WWNs that it found on our SAN fabric. (If the port is
not shown, a zone configuration problem exists.)



Example 6-36 Available host WWNs
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:ITSO_admin>

After we verify that the SVC can see our host, we create the host entry and assign the WWN
to this entry, as shown in Example 6-37.

Example 6-37 Create the host entry


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkhost -name Nile -hbawwpn
210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:ITSO_admin>

Verifying that you can see your storage subsystem


If our zoning was performed correctly, the SVC can also see the storage subsystem with the
svcinfo lscontroller command, as shown in Example 6-38.

Example 6-38 Available storage controllers


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id
product_id_low product_id_high
0 DS4500 IBM
1742-900
1 DS4700 IBM
1814 FAStT

Getting your disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if the SVC sees many available, unmanaged MDisks), we get the LUN serial
numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-89 and
Figure 6-90 show our serial numbers. Figure 6-89 shows disk serial number VM_W2k3.

Figure 6-89 Obtaining disk serial number VM_W2k3

Figure 6-90 shows disk serial number VM_SLES.

Figure 6-90 Obtaining disk serial number VM_SLES

We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.
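Before you create the image mode volumes, you can cross-check the discovered MDisks against the serial numbers that were recorded in Storage Manager. A quick sketch from the CLI (cluster address and user ID are placeholders):

   # List the candidate MDisks and compare the UID column with the recorded serial numbers
   ssh admin@svc_cluster "svcinfo lsmdisk" | grep unmanaged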



6.7.3 Moving the LUNs to the IBM SAN Volume Controller
In this step, we move the LUNs that are assigned to the ESX server and reassign them to the
SVC.

Our ESX server has two LUNs, as shown in Figure 6-91.

Figure 6-91 VMware LUNs

The VMs are on these LUNs. Therefore, to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server. However, we must shut down or suspend all VMware guests that are using these LUNs.

Moving VMware guest LUNs


To move the VMware LUNs to the SVC, complete the following steps:
1. By using Storage Manager, we identified the LUN number that was presented to the ESX
server. Record which LUN had which LUN number (Figure 6-92).

Figure 6-92 Identify LUN numbers in IBM DS4000 Storage Manager

2. Identify all of the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the VM and open the Summary tab. The datastore that is used is displayed under Datastore. Figure 6-93 on page 321 shows a Linux VM that is using the datastore that is named SLES_Costa_Rica.

Figure 6-93 Identify the LUNs that are used by the VMs

3. If you have several ESX hosts, also check the other ESX hosts to ensure that no guest
operating system is running and using this datastore.
4. Repeat steps 1 - 3 for every datastore that you want to migrate.
5. After the guests are shut down or suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and remask the disks to the SVC.
6. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-39 shows the commands that we used to discover our
MDisks and to verify that we have the correct MDisks.

Example 6-39 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
21 mdisk21 online unmanaged
60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>



Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
command task display) with the serial number that you obtained earlier, as shown in
Figure 6-89 on page 319 and Figure 6-90 on page 319.

7. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-40.

Example 6-40 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk

21 ESX_SLES online unmanaged


60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

8. We create our image mode volumes by using the svctask mkvdisk command
(Example 6-41). The use of the -vtype image parameter ensures that it creates image
mode volumes, which means that the virtualized disks have the same layout as though
they were not virtualized.

Example 6-41 Create the image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0
-vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0
-vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>

9. We can map the new image mode volumes to the host. Use the same SCSI LUN IDs as
on the storage subsystem for the mapping, as shown in Example 6-42.

Example 6-42 Map the volumes to the host


IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdiskhostmap -host Nile -scsi 0
ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkvdiskhostmap -host Nile -scsi 1
ESX_W2k3_IVD Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD
60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD
60050768018301BF2800000000000029

10.By using the VMware Management Console, rescan to discover the new volumes. Open the Configuration tab, select Storage Adapters, and then click Rescan. During the rescan, you might receive geometry errors when ESX discovers that the old disks disappeared. Your volumes appear with new vmhba device names.
11.We are now ready to restart the VMware guests.

At this point, you migrated the VMware LUNs successfully to the SVC.
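As a final check, you can confirm from the CLI that the SCSI IDs presented to the ESX host match the original LUN numbers that were recorded in step 1. A sketch, reusing the host name of this example:

   # The SCSI_id column should match the LUN numbers that the ESX server used before
   ssh admin@svc_cluster "svcinfo lshostvdiskmap Nile"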

6.7.4 Migrating the image mode volumes


While the VMware server and its VMs are still running, we migrate the image mode volumes
onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


In this example, we migrate the image mode volumes to striped volumes and move the data to another storage subsystem in one step.

Adding a storage subsystem to the IBM SAN Volume Controller


If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-94.

Figure 6-94 ESX SVC SAN environment (zoning per migration scenario: the ESX 3.x host, the Linux and W2K hosts, the SVC I/O group, and the IBM or OEM storage subsystems are connected through the SAN in the Green, Red, Blue, and Black zones)

Make fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone so that the SVC can communicate with it directly.



We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it is removed from the SVC.

We assume that you created the necessary zones.
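
The exact zoning commands depend on your switch vendor. As a hedged illustration only, on a Brocade fabric the Red Zone and Green Zone updates might look similar to the following sketch, where the zone, alias, and configuration names are invented examples and the aliases are assumed to exist already:

zonecreate "Red_Zone_NewDS", "SVC_N1_P1; SVC_N2_P1; NewDS_CtrlA; NewDS_CtrlB"
zonecreate "Green_Zone_Nile", "Nile_HBA1; NewDS_CtrlA; NewDS_CtrlB"
cfgadd "ITSO_cfg", "Red_Zone_NewDS; Green_Zone_Nile"
cfgsave
cfgenable "ITSO_cfg"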

In our environment, we performed the following tasks:


򐂰 Created three LUNs on another storage subsystem and mapped them to the SVC
򐂰 Discovered the LUNs as MDisks
򐂰 Created a storage pool
򐂰 Renamed these LUNs to more meaningful names
򐂰 Put all these MDisks into this storage pool

You can see the output of the commands in Example 6-43.

Example 6-43 Create a storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged
55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged
55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged
55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000

24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the new
storage pool (MDG_ESX_VD) by using the svctask migratevdisk command, as shown in
Example 6-44. While the migration is running, our VMware ESX server and our VMware
guests continue to run.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-44. Listing the storage pool with the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing as those extents are
moved to the new storage pool.

Example 6-44 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp
MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp
MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning

3 MDG_Nile_VM online 2 2
130.0GB 512 1.0GB 130.00GB 130.00GB
130.00GB 100 0
4 MDG_ESX_VD online 3 0
165.0GB 512 35.0GB 0.00MB 0.00MB
0.00MB 0 0
IBM_2145:ITSO-CLS1:ITSO_admin>

If you compare the svcinfo lsmdiskgrp output after the migration (as shown in
Example 6-45), you can see that all of the virtual capacity was moved from the old storage
pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column
shows that the capacity is now spread over three MDisks.

Example 6-45 List MDisk group


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 MDG_Nile_VM online 2 0
130.0GB 512 130.0GB 0.00MB 0.00MB
0.00MB 0 0
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0
IBM_2145:ITSO-CLS1:ITSO_admin>

The migration to the SVC is complete. You can remove the original MDisks from the SVC and
remove these LUNs from the storage subsystem.

If these LUNs are the last LUNs that were used on our storage subsystem, we can remove
that storage subsystem from our SAN fabric.
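
If you want to remove the now-empty image mode MDisks at this point, a minimal hedged sketch follows. The MDisk and storage pool names are the ones from our example. Before you run the command, verify with the svcinfo lsmdisk command that the MDisks no longer hold any volume extents; afterward, unmap the LUNs at the storage subsystem and run the svctask detectmdisk command so that the MDisks disappear from the SVC configuration:

svctask rmmdisk -mdisk ESX_SLES:ESX_W2k3 MDG_Nile_VM
svctask detectmdisk
svcinfo lsmdisk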

6.7.5 Preparing to migrate from the IBM SAN Volume Controller


Before we change the ESX server’s LUNs from being accessed through the SVC as volumes to
being accessed directly from the storage subsystem, we must convert the volumes into
image mode volumes.

You might want to perform this process for any one of the following reasons:
򐂰 You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
򐂰 You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
򐂰 You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
򐂰 Changes to your environment no longer require this host to use the SVC.

We can perform other preparatory activities before we shut down the host and reconfigure the
LUN masking and mapping. This section describes those activities. In our example, we move
volumes that are on a DS4500 to image mode volumes that are on a DS4700.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in “Adding a storage
subsystem to the IBM SAN Volume Controller” on page 323 and “Make fabric zone changes”
on page 323.

Creating LUNs
On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see
them. These two LUNs eventually are given directly to the host, replacing the volumes
that the host currently uses. To check that the SVC can use them, run the svctask detectmdisk
command, as shown in Example 6-46.

Example 6-46 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID

23 IBMESX-MD1 online managed 4


MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged
120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000

Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are being used by
other activities. We also create the storage pools to hold our new MDisks, as shown in
Example 6-47.

Example 6-47 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:ITSO_admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:ITSO_admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0

5 MDG_IVD_ESX online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
IBM_2145:ITSO-CLS1:ITSO_admin>

Our SVC environment is ready for the volume migration to image mode volumes.

6.7.6 Migrating the managed volumes to image mode volumes


While our ESX server is still running, we migrate the managed volumes onto the new MDisks
by using image mode volumes. The command to perform this task is the svctask
migratetoimage command, which is shown in Example 6-48.

Example 6-48 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk
ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:ITSO_admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk
ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5
MDG_IVD_ESX 120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5
MDG_IVD_ESX 100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and use the VMs that are running on
the server.

You can check the migration status by using the svcinfo lsmigrate command, as shown in
Example 6-49.

Example 6-49 The svcinfo lsmigrate command and output


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5

max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin>

After the migration completes, the image mode volumes are ready to be removed from the
ESX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.

6.7.7 Removing the LUNs from the IBM SAN Volume Controller
Your ESX server’s configuration determines in what order your LUNs are removed from the
control of the SVC, and whether you must reboot the ESX server and suspend the VMware
guests.

In our example, we moved the VM disks. Therefore, to remove these LUNs from the control of
the SVC, we must stop and suspend all of the VMware guests that are using these LUNs.
Complete the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo
lshostvdiskmap command, as shown in Example 6-50. Compare the volume UID values to
match each volume to its SCSI LUN ID.

Example 6-50 Note the SCSI LUN IDs


IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD
210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD
210000E08B892BCD 60050768018301BF2800000000000029

IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count
0 vdisk_A 0 io_grp0 online
2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online
4 MDG_ESX_VD 70.0GB striped
60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online
4 MDG_ESX_VD 60.0GB striped
60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:ITSO_admin>



2. Shut down and suspend all guests that are using the LUNs. You can use the same method
that is described in “Moving VMware guest LUNs” on page 320 to identify the guests that
are using this LUN.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command, as
shown in Example 6-51. To confirm that the volumes were removed, use the svcinfo
lshostvdiskmap command, which shows that these volumes are no longer mapped to the
ESX server.

Example 6-51 Remove the volumes from the host


IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which
makes the MDisks unmanaged, as shown in Example 6-52.

Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that there is no outstanding dirty cached data for the volume that is being removed. If
uncommitted cached data still exists, the command fails with the following error
message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data exists to destage and how busy the I/O
subsystem is determine how long this command takes to complete.

You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
򐂰 empty: No modified data exists in the cache.
򐂰 not_empty: Modified data might exist in the cache.
򐂰 corrupt: Modified data might exist in the cache, but the data was lost.
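
For reference, a minimal sketch of this check follows; the volume name is taken from our example, and the output is abbreviated to the relevant line:

svcinfo lsvdisk ESX_W2k3_IVD
...
fast_write_state empty
...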

Example 6-52 Remove the volumes from the SVC


IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:ITSO_admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
26 ESX_IVD_SLES online unmanaged
120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin>

5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC and present them back to the ESX server. Remember that in
Example 6-50 on page 329, we recorded the SCSI LUN IDs. To map your LUNs on the storage
subsystem, use the same SCSI LUN IDs that you used in the SVC.

Important: This step is the last step that you can perform and still safely back out of
any changes made so far.

Up to this point, you can reverse all of the following actions that you performed to get
the server back online without data loss:
򐂰 Remap and remask the LUNs back to the SVC.
򐂰 Run the svctask detectmdisk command to rediscover the MDisks.
򐂰 Re-create the volumes with the svctask mkvdisk command.
򐂰 Remap the volumes back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.

6. By using the VMware Management Console, rescan to discover the new volume.
Figure 6-95 shows the view before the rescan. Figure 6-96 on page 332 shows the view
after the rescan. The size of the LUN changed because we moved to another LUN on
another storage subsystem.

Figure 6-95 Before adapter rescan



Figure 6-96 After adapter rescan

During the rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with a new vmhba address and VMware recognizes it
as our VMWARE-GUESTS disk.
We are now ready to restart the VMware guests.
7. To ensure that the MDisks are removed from the SVC, run the svctask detectmdisk
command. The MDisks are discovered as offline and then automatically removed when
the SVC determines that no volumes are associated with these MDisks.

6.8 Migrating AIX SAN disks to SVC volumes


In this section, we describe how to move two LUNs, which an AIX server accesses directly
from our DS4000 storage subsystem, over to the SVC.

We manage those LUNs with the SVC, move them between other managed disks, and then
move them back to image mode disks so that those LUNs can then be masked and mapped
back to the AIX server directly.

By using this example, you can perform any of the following tasks in your environment:
򐂰 Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC, which is the first task that you perform when you are introducing
the SVC into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks by using your storage subsystem LUN management tool. This step starts in
6.8.2, “Preparing your IBM SAN Volume Controller to virtualize disks” on page 335.

򐂰 Move data between storage subsystems while your AIX server is still running and
servicing your business application.
You can perform this task if you are removing a storage subsystem from your SAN
environment and you want to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. This step is described in 6.8.4, “Migrating image mode volumes to volumes”
on page 342.
򐂰 Move your AIX server’s LUNs back to image mode volumes so that they can be remapped
and remasked directly back to the AIX server.
This step starts in 6.8.5, “Preparing to migrate from the IBM SAN Volume Controller” on
page 344.

Use these tasks individually or together to migrate your AIX server’s LUNs from one storage
subsystem to another storage subsystem by using the SVC as your migration tool. If you do
not use all three tasks, you can introduce or remove the SVC from your environment.

The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.

Our AIX environment is shown in Figure 6-97.

Figure 6-97 AIX SAN environment (zoning for the migration scenario: the AIX host and the IBM or OEM storage subsystem are connected through the SAN in the Green Zone)

Figure 6-97 also shows that our AIX server is connected to our SAN infrastructure. It has two
LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.

The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the
itsoaixvg1 LVM group, as shown in Example 6-53 on page 334.



Example 6-53 AIX SAN configuration
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-97 on page 333.

The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) Zone with
our storage subsystem.

The two LUNs, hdisk3 and hdisk4, were defined on the storage subsystem. By using LUN
masking, they are directly available to our AIX server.

6.8.1 Connecting the IBM SAN Volume Controller to your SAN fabric
This section describes the steps to take to introduce the SVC into your SAN environment.
Although this section summarizes only these activities, you can accomplish this task without
any downtime to any host or application that also uses your SAN.

If an SVC is already connected, skip to 6.8.2, “Preparing your IBM SAN Volume Controller to
virtualize disks” on page 335.

Important: Be careful when you are connecting the SVC to your SAN because this action
requires you to connect cables to your SAN switches and alter your switch zone
configuration. Performing these tasks incorrectly can render your SAN inoperable, so
ensure that you fully understand the effect of your actions.

Complete the following tasks to connect the SVC to your SAN fabric:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and redundant
ac-power switches), cable the SVC correctly, power on the SVC, and verify that the SVC is
visible on your SAN.
2. Create and configure your SVC system.
3. Create the following zones:
– An SVC node zone (our Black Zone, as shown in Example 6-66 on page 344)
– A storage zone (our Red Zone)
– A host zone (our Blue Zone)

Figure 6-98 on page 335 shows our environment.

Figure 6-98 AIX host with SVC attached (the AIX host, the SVC I/O group, and the IBM or OEM storage subsystems are connected through the SAN in the Green, Red, Blue, and Black zones)

6.8.2 Preparing your IBM SAN Volume Controller to virtualize disks


This section describes the preparatory tasks that we perform before we take our AIX server
offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your
existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two AIX LUNs to the SVC, they first are used in image mode; therefore,
we must create a storage pool to hold those disks. We must create an empty storage pool for
these disks by using the commands that are shown in Example 6-54. We name the storage
pool, which will hold our LUNs, aix_imgmdg.

Example 6-54 Create an empty storage pool (mdiskgrp)


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size
free_capacity virtual_capacity used_capacity real_capacity overallocation
warning

7 aix_imgmdg online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
IBM_2145:ITSO-CLS2:ITSO_admin>



Creating our host definition
If you prepared the zones correctly, the SVC can see the AIX server’s HBAs on the fabric.
(Our host has two HBAs.)

First, we get the WWNs for our AIX server’s HBAs because we have many hosts that are
connected to our SAN fabric and in the Blue Zone. We want to ensure that we have the
correct WWNs to reduce our AIX server’s downtime. Example 6-55 shows the commands to
get the WWNs; our host’s WWPNs are 10000000C932A7FB and 10000000C932A800.

Example 6-55 Discover your WWN


#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495

Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
##

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which were
not yet allocated to a host, that the SVC can see on the SAN fabric. Example 6-56 shows the
output of the host WWNs that it found in our SAN fabric. (If the port is not shown, a zone
configuration problem exists.)

Example 6-56 Available host WWNs

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:ITSO_admin>

After we verify that the SVC can see our host (Kanaga), we create the host entry and assign
the WWN to this entry, as shown with the commands in Example 6-57.

Example 6-57 Create the host entry


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkhost -name Kanaga -hbawwpn
10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4

WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:ITSO_admin>

Verifying that we can see our storage subsystem


If we performed the zoning correctly, the SVC can see the storage subsystem with the
svcinfo lscontroller command, as shown in Example 6-58.

Example 6-58 Discover the storage controller


IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered
storage subsystem name in the SVC. In complex SANs, we suggest that you rename your
storage subsystem to a more meaningful name.
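
For example, the following hedged sketch renames the controller with ID 1; the new name is arbitrary and is used here for illustration only:

svctask chcontroller -name ITSO_DS4700 1
svcinfo lscontroller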

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if many available unmanaged MDisks are seen by the SVC), we obtain the LUN
serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-99 on
page 339 shows disk serial number kanage_lun0.

Figure 6-99 Obtaining disk serial number kanage_lun0

Figure 6-100 shows disk serial number kanga_Lun1.

Figure 6-100 Obtaining disk serial number kanga_Lun1

We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.



6.8.3 Moving the LUNs to the IBM SAN Volume Controller
In this step, we move the LUNs that are assigned to the AIX server and reassign them to the
SVC.

Because we want to move only the LUN that holds our application and data files, we move
that LUN without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.

Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver
(SDD) is installed on the AIX server. You can install the SDD in advance; however, doing so
might require an outage of your host.
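
As a hedged sketch, you can confirm that the SDD is installed and that it sees its devices by checking the installed filesets and querying the SDD configuration; the exact fileset name varies by SDD release:

lslpp -l | grep -i sdd
datapath query device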

Complete the following steps to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Complete the following steps to unmount and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.
Example 6-59 shows the commands that were run on Kanaga.

Example 6-59 AIX command sequence


#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. By using Storage Manager (our storage subsystem management tool), the disks can be
unmapped and unmasked from the AIX server and remapped and remasked as disks of
the SVC.
4. From the SVC, discover the new disks by using the svctask detectmdisk command. The
disks are discovered and named mdiskN, where N is the next available MDisk number
(starting from 0). Example 6-60 shows the commands that were used to discover our
MDisks and to verify that the correct MDisks are available.

Example 6-60 Discover the new MDisks


IBM_2145:ITSO-CLS2:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 mdisk24 online unmanaged
5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo
lsmdisk command output) with the serial numbers that you discovered earlier, as
shown in Figure 6-99 on page 339 and Figure 6-100 on page 339.

5. After you verify that the correct MDisks are available, rename them to avoid confusion in
the future when you perform other MDisk-related tasks, as shown in Example 6-61.

Example 6-61 Rename the MDisks


IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online unmanaged
5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>

6. Create the image mode volumes by using the svctask mkvdisk command and the option
-vtype image, as shown in Example 6-62. This command virtualizes the disks in the same
layout as though they were not virtualized.

Example 6-62 Create the image mode volumes


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0
-vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0
-vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>

7. Map the new image mode volumes to the host, as shown in Example 6-63.

Example 6-63 Map the volumes to the host


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>

FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.



Complete the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.

You are ready to start your application.
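
As a hedged sketch only, the sequence on our host might look like the following example; the mount points are hypothetical and depend on your file system layout:

cfgmgr -vs
lspv
varyonvg itsoaixvg
varyonvg itsoaixvg1
mount /itso_fs1
mount /itso_fs2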

6.8.4 Migrating image mode volumes to volumes


While the AIX server is still running and our file systems are in use, we migrate the image
mode volumes onto striped volumes, spreading the extents over three other MDisks.

Preparing MDisks for striped mode volumes


From our storage subsystem, we performed the following tasks:
򐂰 Created and allocated three LUNs to the SVC
򐂰 Discovered them as MDisks
򐂰 Renamed these LUNs to more meaningful names
򐂰 Created a storage pool
򐂰 Put all of these MDisks into this storage pool

You can see the output of our commands in Example 6-64.

Example 6-64 Create a storage pool


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged
6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged
6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged
6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:ITSO_admin>
IBM_2145:ITSO-CLS2:ITSO_admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin>svctask addmdisk -mdisk aix_vd1 aix_vd

IBM_2145:ITSO-CLS2:ITSO_admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>

Migrating the volumes


We are ready to migrate the image mode volumes onto striped volumes by using the svctask
migratevdisk command, as shown in Example 6-24 on page 306.

While the migration is running, our AIX server is still running and we can continue accessing
the files.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-65. Listing the storage pool by using the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing while those extents
are moved to the new storage pool.

Example 6-65 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp
aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp
aix_vd

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:ITSO_admin>



After this task is complete, the volumes are spread over three MDisks in the aix_vd storage
pool, as shown in Example 6-66. The old storage pool is empty.

Example 6-66 Migration complete


IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:ITSO_admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC and
you can remove these LUNs from the storage subsystem.

If these LUNs are the last LUNs that were used on our storage subsystem, we can remove
that storage subsystem from our SAN fabric.

6.8.5 Preparing to migrate from the IBM SAN Volume Controller


Before we change the AIX server’s LUNs from being accessed by the SVC as volumes to
being accessed directly from the storage subsystem, we must convert the volumes into image
mode volumes.

You can perform this task for one of the following reasons:
򐂰 You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
򐂰 You used the SVC to FlashCopy or Metro Mirror a volume onto another volume and you no
longer need that host that is connected to the SVC.

򐂰 You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
򐂰 Changes to your environment no longer require this host to use the SVC.

Other preparatory tasks need to be performed before we shut down the host and reconfigure
the LUN masking and mapping. This section describes those tasks.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 6-101.

Figure 6-101 Environment with the SVC (the host, the SVC I/O group, and the IBM or OEM storage subsystems are connected through the SAN in the Green, Red, Blue, and Black zones)

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone so that the SVC can communicate with it directly.

Create a Green Zone for our host to use when we are ready for it to access the disk directly
after it is removed from the SVC. (It is assumed that you created the necessary zones.)

After your zone configuration is set up correctly, the SVC sees the new storage subsystem’s
controller by using the svcinfo lscontroller command, as shown in Example 6-67 on
page 346. It is also useful to rename the controller to a more meaningful name by using the
svctask chcontroller -name command.



Example 6-67 Discovering the new storage subsystem
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:ITSO_admin>

Creating LUNs
On our storage subsystem, we created two LUNs and masked them so that the SVC can see
them. We eventually give these LUNs directly to the host and remove the volumes that the
host is using. To check that the SVC can use the LUNs, run the svctask detectmdisk
command, as shown in Example 6-68.

In our example, we use two 10 GB LUNs that are on the DS4500 subsystem. Therefore, we
migrate back to image mode volumes and move to another subsystem in one step. We
deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline
here.

Example 6-68 Discover the new MDisks


IBM_2145:ITSO-CLS2:ITSO_admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>

Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are used by other
activities. Also, we create the storage pools to hold our new MDisks, as shown in
Example 6-69 on page 347.

Example 6-69 Rename the MDisks
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:ITSO_admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
6 aix_vd online 3 2
18.0GB 512 5.0GB 13.00GB 13.00GB
13.00GB 72 0
7 aix_imgmdg offline 2 0
13.0GB 512 13.0GB 0.00MB 0.00MB
0.00MB 0 0
IBM_2145:ITSO-CLS2:ITSO_admin>

Now, our SVC environment is ready for the volume migration to image mode volumes.

6.8.6 Migrating the managed volumes


While our AIX server is still running, we migrate the managed volumes onto the new MDisks
as image mode volumes by using the svctask migratetoimage command, which is
shown in Example 6-70.

Example 6-70 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk
AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:ITSO_admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk
AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000

29 AIX_MIG online image 3
KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3
KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:ITSO_admin>

During the migration, our AIX server is unaware that its data is being moved physically
between storage subsystems.

After the migration is complete, the image mode volumes are ready to be removed from the
AIX server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.

6.8.7 Removing the LUNs from the IBM SAN Volume Controller
The next step requires downtime while we remap and remask the disks so that the host sees
them directly through the Green Zone.

Because our LUNs hold data files only and we use a unique VG, we can remap and remask
the disks without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage subsystem might require a driver other than
the SDD. Check with the storage subsystem’s vendor to see which driver you need. You might
be able to install this driver in advance.

Complete the following steps to remove the SVC:


1. Confirm that the correct device driver for the new storage subsystem is loaded. Because
we are moving to a DS4500, we can continue to use the SDD.
2. Complete the following steps to shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.

3. Remove the volumes from the host by using the svctask rmvdiskhostmap command, as
shown in Example 6-71. To confirm that you removed the volumes, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX
server.

Example 6-71 Remove the volumes from the host


IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:ITSO_admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which
makes the MDisks unmanaged, as shown in Example 6-72.

Cached data: When you run the svctask rmvdisk command, the SVC first confirms
that there is no outstanding dirty cached data for the volume that is being removed. If
uncommitted cached data still exists, the command fails with the following error
message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data there is to destage and how busy the I/O
subsystem is determine how long this command takes to complete.

You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
򐂰 empty: No modified data exists in the cache.
򐂰 not_empty: Modified data might exist in the cache.
򐂰 corrupt: Modified data might exist in the cache, but any modified data was lost.

Example 6-72 Remove the volumes from the SVC


IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
29 AIX_MIG online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin>



5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of
any changes that you made.

Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
򐂰 Remap and remask the LUNs back to the SVC.
򐂰 Run the svctask detectmdisk command to rediscover the MDisks.
򐂰 Re-create the volumes with the svctask mkvdisk command.
򐂰 Remap the volumes back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and
mapping were successful, our AIX server boots as though nothing happened. Complete the
following steps:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 6-73 shows the removal by using
SDD.

Example 6-73 Remove references to old paths by using SDD


#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN volume Controller Device
hdisk6 Defined 1Z-08-02 SAN volume Controller Device
hdisk7 Defined 1D-08-02 SAN volume Controller Device
hdisk8 Defined 1D-08-02 SAN volume Controller Device
hdisk10 Defined 1Z-08-02 SAN volume Controller Device
hdisk11 Defined 1Z-08-02 SAN volume Controller Device
hdisk12 Defined 1D-08-02 SAN volume Controller Device
hdisk13 Defined 1D-08-02 SAN volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted

vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
#

Example 6-74 shows the removal by using SDDPCM.

Example 6-74 Remove references to old paths by using SDDPCM


# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems by using the mount /MOUNT_POINT command.
You are ready to start your application.
6. To ensure that the MDisks are removed from the SVC, run the svctask detectmdisk
command. The MDisks first are discovered as offline. Then, they are removed automatically
after the SVC determines that no volumes are associated with these MDisks.

6.9 Using IBM SAN Volume Controller for storage migration


The primary use of the SVC is not as a storage migration tool. However, the advanced
capabilities of the SVC enable us to use the SVC as a storage migration tool. Therefore, you
can add the SVC temporarily to your SAN environment to copy the data from one storage
subsystem to another storage subsystem. The SVC enables you to copy image mode
volumes directly from one subsystem to another subsystem while host I/O is running. The
only required downtime is when the SVC is added to and removed from your SAN
environment.

To use the SVC for migration purposes only, complete the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.



4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the
host.
8. Remove the SVC from your SAN.
9. Mount the LUNs or start the host again.

The migration is complete.

As you can see, little downtime is required. If you prepare everything correctly, you can
reduce your downtime to a few minutes. The copy process is handled by the SVC, so host
performance is not hindered while the migration progresses.
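
As a summary, the following hedged sketch shows the kind of SVC commands that such a migration-only pass typically involves. The object names are placeholders, and the complete procedures are described in the sections that are listed next:

svctask mkvdisk -mdiskgrp SOURCE_IMG_POOL -iogrp 0 -vtype image -mdisk SOURCE_MDISK -name MIG_VDISK
svctask mkvdiskhostmap -host MY_HOST -scsi 0 MIG_VDISK
svctask migratetoimage -vdisk MIG_VDISK -mdisk TARGET_MDISK -mdiskgrp TARGET_IMG_POOL
svcinfo lsmigrate
svctask rmvdiskhostmap -host MY_HOST MIG_VDISK
svctask rmvdisk MIG_VDISK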

To use the SVC for storage migrations, complete the steps that are described in the following
sections:
򐂰 6.5.2, “Adding the SAN Volume Controller between the host system and the DS 3400” on
page 259
򐂰 6.5.6, “Migrating the volume from image mode to image mode” on page 283
򐂰 6.5.7, “Removing image mode data from the IBM SAN Volume Controller” on page 291

6.10 Using volume mirroring and thin-provisioned volumes together

In this section, we show that you can use the volume mirroring feature and thin-provisioned
volumes together to move data from a fully allocated volume to a thin-provisioned volume.

6.10.1 Zero detect feature


The zero detect feature for thin-provisioned volumes enables clients to reclaim unused
allocated disk space (zeros) when a fully allocated volume is converted to a thin-provisioned
volume by using volume mirroring.

To migrate from a fully allocated volume to a thin-provisioned volume, complete the following
steps:
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.

By using this feature, clients can free managed disk space easily and make better use of their
storage without the need to purchase any other functions for the SVC.

Volume mirroring and thin-provisioned volume functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SVC management by using thin-provisioned volumes without having to allocate
more storage space.

Zero detect works only if the disk contains zeros. An uninitialized disk can contain anything,
unless the disk is formatted (for example, by using the -fmtdisk flag on the mkvdisk
command).
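
If the fully allocated volume contains stale, nonzero data in unused file system space, you can
overwrite that free space with zeros from the host before you add the thin-provisioned copy.
The following commands are a simple sketch for a UNIX or Linux host; the mount point /data is a
hypothetical placeholder, and the temporary file must be removed afterward because the file
system is deliberately filled during this step:

dd if=/dev/zero of=/data/zerofill bs=1M     # write zeros until the file system free space is exhausted
rm /data/zerofill                           # delete the temporary file to release the space

On Windows hosts, the Sysinternals sdelete utility with the -z option performs a similar zero fill
of free space.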

Figure 6-102 shows the thin-provisioned volume zero detect concept.

Figure 6-102 Thin-provisioned volume zero detect feature

Figure 6-103 shows the thin-provisioned volume organization.

Figure 6-103 Thin-provisioned volume organization

As shown in Figure 6-103 on page 353, a thin-provisioned volume features the following
components:
򐂰 Used capacity
This term specifies the portion of real capacity that is used to store data. For
non-thin-provisioned copies, this value is the same as the volume capacity. If the volume
copy is thin-provisioned, the value increases from zero to the real capacity value as more
of the volume is written to.
򐂰 Real capacity
This capacity is the real allocated space in the storage pool. In a thin-provisioned volume,
this value can differ from the total capacity.
򐂰 Free capacity
This value specifies the difference between the real capacity and the used capacity. If the
used capacity approaches the real capacity and the volume is configured with the
-autoexpand option, the SVC automatically expands the real capacity that is allocated to
this volume so that a fixed amount of free (contingency) capacity is maintained.
򐂰 Grains
This value is the smallest unit into which the allocated space can be divided.
򐂰 Metadata
This value is allocated in the real capacity, and it tracks the used capacity, real capacity,
and free capacity.

6.10.2 Volume mirroring with thin-provisioned volumes


The following example shows the use of the volume mirror feature with thin-provisioned
volumes:
1. We create a fully allocated volume of 15 GiB named VD_Full, as shown in Example 6-75.

Example 6-75 VD_Full creation example


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk
0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We add a thin-provisioned volume copy with the volume mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76.

Example 6-76 The addvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9
-vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-76, VD_Full now has a copy_id 1 whose used_capacity is
0.41 MB, which is equal to the metadata, because only zeros exist on the disk.
The real_capacity is 323.57 MB, which corresponds to the -rsize 2% value that is specified
on the addvdiskcopy command. The free_capacity is 323.17 MB, which is equal to the real
capacity minus the used capacity.
If zeros are written on the disk, the thin-provisioned volume does not use space.
Example 6-77 shows that the thin-provisioned volume does not use space even when the
capacities are in sync.

Example 6-77 Thin-provisioned volume display


IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress
estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the volume mirror or remove one of the copies, which keeps the
thin-provisioned copy as our valid copy by using the splitvdiskcopy command or the
rmvdiskcopy command.
If you need your copy as a thin-provisioned clone, we suggest that you use the
splitvdiskcopy command because that command generates a new volume and you can
map to any server that you want.
If you need your copy because you are migrating from a previously fully allocated volume
to go to a thin-provisioned volume without any effect on the server operations, we suggest
that you use the rmvdiskcopy command. In this case, the original volume name is kept and
it remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.

Example 6-78 The splitvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask splitvdiskcopy -copy 1 -name VD_TPV
VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
0 MDG_DS47 15.00GB striped
60050768018401BF280000000000000B 0 1 empty
7 VD_TPV 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_TPV
id 7
name VD_TPV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB

type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-79 shows the rmvdiskcopy command.

Example 6-79 The rmvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1

fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Chapter 7. Advanced features for storage efficiency

In this chapter, we introduce the basic concepts of dynamic data relocation and storage
optimization features. The IBM System Storage SAN Volume Controller (SVC) offers the
following software functions for storage efficiency, which we describe in this chapter:
򐂰 Easy Tier
򐂰 Thin provisioning
򐂰 Real-time Compression (RtC)

We provide a basic technical overview and the benefits of each feature. For more information
about planning and configuration, see the following IBM Redbooks publications:
򐂰 Easy Tier:
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
– IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521
– IBM DS8000 Easy Tier, REDP-4667 (This concept is similar to SVC Easy Tier.)
򐂰 Thin provisioning:
– Thin Provisioning in an IBM SAN or IP SAN Enterprise Environment, REDP-4265
– DS8000 Thin Provisioning, REDP-4554 (similar concept to SVC thin provisioning)
򐂰 RtC:
– Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
– Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072

7.1 Introduction
In modern and complex application environments, the increasing and often unpredictable
demands for storage capacity and performance lead to issues of planning and optimization of
storage resources.

Consider the following typical storage management issues:


򐂰 Usually when a storage system is implemented, only a portion of the configurable physical
capacity is deployed. When the storage system runs out of the installed capacity and more
capacity is requested, a hardware upgrade is implemented to add physical resources to
the storage system. This new physical capacity can hardly be configured to keep an even
spread of the overall storage resources. Typically, the new capacity is allocated to fulfill
only new storage requests. The existing storage allocations do not benefit from the new
physical resources. Similarly, the new storage requests do not benefit from the existing
resources; only new resources are used.
򐂰 In a complex production environment, it is not always possible to optimize storage
allocation for performance. The unpredictable rate of storage growth and the fluctuations
in throughput requirements, which are I/O per second (IOPS), often lead to inadequate
performance. Furthermore, the tendency to use even larger volumes to simplify storage
management works against the granularity of storage allocation, and a cost-efficient
storage tiering solution becomes difficult to achieve. With the introduction of high
performing technologies, such as solid-state drives (SSD) or all flash arrays, this challenge
becomes even more important.
򐂰 The move to larger and larger physical disk drive capacities means that previous access
densities that were achieved with low-capacity drives can no longer be sustained.
򐂰 Any business has applications that are more critical than others, and a need exists for
specific application optimization. Therefore, the ability to relocate specific application data
to faster storage media is needed.
򐂰 Although more servers are purchased with local SSDs attached for better application
response time, the data distribution across these direct-attached SSDs and external
storage arrays must be carefully addressed. An integrated and automated approach is
crucial to achieve performance improvement without compromise to data consistency,
especially in a disaster recovery situation.

All of these issues deal with data placement and relocation capabilities or data volume
reduction. Most of these challenges can be managed by keeping spare resources available and
by moving data with data mobility tools or operating system features (such as host-level
mirroring) to optimize storage configurations. However, all of these corrective actions are
expensive in terms of hardware resources, labor, and service availability. The ability to
relocate data dynamically among the physical storage resources, or to effectively reduce the
amount of stored data, transparently to the attached host systems, is becoming increasingly
important.

7.2 Easy Tier


In today’s storage market, SSDs and flash arrays are emerging as an attractive alternative to
hard disk drives (HDDs). Because of their low response times, high throughput, and
IOPS-energy-efficient characteristics, SSDs and flash arrays have the potential to allow your
storage infrastructure to achieve significant savings in operational costs. However, the current
acquisition cost per GB for SSDs or flash array is higher than for HDDs.

SSD and flash array performance depends greatly on workload characteristics; therefore,
they need to be used with HDDs for optimal performance.

Choosing the correct mix of drives and the correct data placement is critical to achieve
optimal performance at low cost. Maximum value can be derived by placing “hot” data with
high I/O density and low response time requirements on SSDs or flash arrays, and targeting
HDDs for “cooler” data that is accessed more sequentially and at lower rates.

Easy Tier automates the placement of data among different storage tiers. Easy Tier can be
enabled for internal and external storage. This SVC feature boosts your storage infrastructure
performance to achieve optimal performance through a software, server, and storage
solution. Additionally, the new, no-charge feature called storage pool balancing, introduced in
the 7.3 SVC firmware version, automatically moves extents within the same storage tier, from
heavily loaded to less-loaded managed disks (MDisks). Storage pool balancing ensures that
your data is optimally placed among all disks within storage pools.

7.2.1 Easy Tier concepts


The SVC implements Easy Tier enterprise storage functions, which were originally available
on IBM DS8000 and IBM XIV enterprise class storage systems. It enables automated
subvolume data placement throughout different or within the same storage tiers to intelligently
align the system with current workload requirements and to optimize the usage of SSDs or
flash arrays. This functionality includes the ability to automatically and non-disruptively
relocate data (at the extent level) from one tier to another tier or even within the same tier, in
either direction to achieve the best available storage performance for your workload in your
environment. Easy Tier reduces the I/O latency for hot spots, but it does not replace storage
cache. Both Easy Tier and storage cache solve a similar access latency workload problem,
but these two methods weigh differently in the algorithmic construction that is based on
“locality of reference”, recency, and frequency. Because Easy Tier monitors I/O performance
from the device end (after cache), Easy Tier can pick up the performance issues that cache
cannot solve and complement the overall storage system performance. Figure 7-1 on
page 364 shows the placement of the Easy Tier engine within the SVC software stack.

Figure 7-1 Easy Tier in the SVC software stack

In general, a storage environment's I/O is monitored at the volume level, and the entire volume
is placed in one appropriate storage tier. Determining the amount of I/O activity, moving
part of the underlying volume to an appropriate storage tier, and reacting to workload
changes are too complex for manual operation. This is where the Easy Tier feature can be
used.

Easy Tier is a performance optimization function because it automatically migrates (or moves)
extents that belong to a volume between different storage tiers (Figure 7-2 on page 365) or
the same storage tier (Figure 7-4 on page 367). Because this migration works at the extent
level, it is often referred to as sublogical unit number (LUN) migration. The movement of the
extents is online and unnoticed from the host’s point of view. As a result of extent movement,
the volume no longer has all its data in one tier but rather in two or three tiers. Figure 7-2 on
page 365 shows the basic Easy Tier principle of operation.

Figure 7-2 Easy Tier

You can enable Easy Tier on a volume basis. Easy Tier monitors the I/O activity and latency
of the extents on all Easy Tier enabled volumes over a 24-hour period. Based on the
performance log, Easy Tier creates an extent migration plan and dynamically moves high
activity or hot extents to a higher disk tier within the same storage pool. Easy Tier also moves
extents whose activity dropped off, or cooled, from a higher disk tier MDisk back to a lower tier
MDisk. When Easy Tier is running in storage pool rebalance mode, it moves extents from
busy MDisks to less busy MDisks of the same type.

7.2.2 SSD arrays and flash MDisks


The SSDs or flash arrays are treated no differently by the SVC than HDDs regarding RAID
arrays or MDisks.

The individual SSDs in the storage that is managed by the SVC are combined into an array,
usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays are used because
of the double parity overhead, with two logical SSDs used for parity only. A LUN is created on
the array and then presented to the SVC as a normal MDisk.

As is the case for HDDs, the SSD RAID array format helps to protect against individual SSD
failures. Depending on your requirements, you can achieve more high availability protection
above the RAID level by using volume mirroring.

The internal storage configuration of flash arrays can differ depending on an array vendor. But
regardless of the methods that are used to configure flash-based storage, the flash system
maps a volume to a host, in this case, the SVC. From the SVC perspective, the volume that is
presented from flash storage is also seen as a normal MDisk.

Starting with SVC DH8 nodes and firmware V7.3, up to two expansion drawers can be
connected to the SVC. Each drawer can hold up to 24 drives, and only SSDs are supported. The
SSDs are then gathered together to form RAID arrays in the same way that RAID arrays are
formed in IBM Storwize systems.

After the creation of a RAID array, it appears as an MDisk of type ssd, which differs from
MDisks that are presented from external storage systems. Because the SVC does not know
from what kind of physical disks the presented MDisks are formed, the default MDisk type that
SVC adds to each external MDisk is enterprise. It is up to the users or administrators to
change the type of MDisks to ssd, enterprise, or nearline (NL).

To change a type of MDisk in the command-line interface (CLI), use the chmdisk command,
as shown in Example 7-1.

Example 7-1 Changing the MDisk tier


IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 controller0
6005076400820008380000000000000000000000000000000000000000000000 enterprise no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 controller0
6005076400820008380000000000000100000000000000000000000000000000 enterprise no

IBM_2145:ITSO_SVC2:superuser>chmdisk -tier nearline mdisk0

IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 controller0
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 controller0
6005076400820008380000000000000100000000000000000000000000000000 enterprise no

Note: The type of MDisk can also be changed in the GUI. From the animated menu on the
left side of the window, hover over Pools and select External Storage or MDisks by
Pools. Click the small plus sign (+) next to the storage controller name or storage pool
name, depending on whether you chose External Storage or MDisks by Pools to expand
the MDisks. Next, right-click an MDisk and choose Select Tier. Then, choose one of three
options to select the correct tier for your MDisk.

If you do not see the Tier column in the External Storage or MDisks by Pools view, right-click
the blue title row and select the Tier check box, as shown on Figure 7-3 on page 367.

Figure 7-3 Customizing the title row to show the Tier column

7.2.3 Disk tiers


The MDisks (LUNs) that are presented to the SVC cluster are likely to have different
performance attributes because of the type of disk or RAID array on which they reside. The
MDisks can be on 15 K revolutions per minute (RPM) Fibre Channel (FC) or serial-attached
SCSI (SAS) disk, nearline SAS or Serial Advanced Technology Attachment (SATA), or even
SSDs or flash storage systems.

The SVC does not automatically detect the type of MDisks, except for MDisks that are formed
out of SSD drives from attached expansion drawers. Instead, all external MDisks are initially
put into the enterprise tier, by default. Then, the administrator must manually change the tier
of MDisks and add them to storage pools. Depending on what type of disks are gathered to
form a storage pool, we distinguish two types of storage pools: single-tier and multitier.

Single-tier storage pools


Figure 7-4 shows a scenario in which a single storage pool is populated with MDisks that are
presented by an external storage controller. In this solution, the striped volumes can be
measured by Easy Tier and can benefit from storage pool balancing mode, which moves
extents between MDisks of the same type.

Figure 7-4 Single-tier storage pool with striped volumes

MDisks that are used in a single-tier storage pool must have the same hardware
characteristics, for example, the same RAID type, RAID array size, disk type, disk RPMs, and
controller performance characteristics.

Multitier storage pools


A multitier storage pool has a mix of MDisks with more than one type of disk tier attribute, for
example, a storage pool that contains a mix of enterprise and ssd MDisks. Figure 7-5 shows a
scenario in which a storage pool is populated with two different MDisk types: one type
belonging to an SSD array and one type belonging to an HDD array. Although this example
shows RAID 5 arrays, other RAID types can be used, as well.

Figure 7-5 Multitier storage pool with striped volumes

Adding SSDs to the pool means that more space also is now available for new volumes or
volume expansion.
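
A multitier pool can be built from the CLI by setting the tier of the MDisks and then grouping
them into one storage pool. The following commands are a minimal sketch; the object names
(MultiTier_Pool, mdisk10, and mdisk11) and the extent size are hypothetical examples:

svctask mkmdiskgrp -name MultiTier_Pool -ext 256          # create the storage pool
svctask chmdisk -tier ssd mdisk10                         # mark the flash-backed MDisk as the ssd tier
svctask chmdisk -tier enterprise mdisk11                  # mark the HDD-backed MDisk as the enterprise tier
svctask addmdisk -mdisk mdisk10:mdisk11 MultiTier_Pool    # add both MDisks to the pool

After volumes are created in this pool, Easy Tier can move their extents between the two tiers.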

Important: Image mode and sequential volumes are not candidates for Easy Tier
automatic data placement because all extents for those types of volumes must reside on
one, specific MDisk and cannot be moved.

The Easy Tier setting can be changed on a storage pool and volume basis. Depending on the
Easy Tier setting and the number of tiers in the storage pool, Easy Tier services might
function differently. Table 7-1 on page 369 shows possible combinations of Easy Tier settings.

Table 7-1 Easy Tier settings

Storage pool Easy Tier setting | Number of tiers in the storage pool | Volume copy Easy Tier setting | Volume copy Easy Tier status (1)
Off     | One          | Off | Inactive (2)
Off     | One          | On  | Inactive (2)
Off     | Two or three | Off | Inactive (2)
Off     | Two or three | On  | Inactive (2)
Measure | One          | Off | Measured (3)
Measure | One          | On  | Measured (3)
Measure | Two or three | Off | Measured (3)
Measure | Two or three | On  | Measured (3)
Auto    | One          | Off | Measured (3)
Auto    | One          | On  | Balanced (4)
Auto    | Two or three | Off | Measured (3)
Auto    | Two or three | On  | Active (5)
On      | One          | Off | Measured (3)
On      | One          | On  | Balanced (4)
On      | Two or three | Off | Measured (3)
On      | Two or three | On  | Active (5)

Table notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that
volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage
statistics for the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables
performance-based pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic
data placement mode for that volume.
6. The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting
for a volume copy is On. Therefore, Easy Tier functions, except pool performance
balancing, are disabled for storage pools with a single tier. Automatic data placement
mode is enabled for all striped volume copies in a storage pool with two or more tiers.

Figure 7-6 on page 370 shows the naming convention and all supported combinations of
storage tiering that are used by Easy Tier.

Figure 7-6 Easy Tier supported storage pools

7.2.4 Easy Tier process


The Easy Tier function includes the following four main processes:
򐂰 I/O Monitoring
This process operates continuously and monitors volumes for host I/O activity. It collects
performance statistics for each extent and derives averages for a rolling 24-hour period of
I/O activity.
Easy Tier makes allowances for large block I/Os; therefore, it considers only I/Os of up
to 64 KB as migration candidates.
This process is efficient and adds negligible processing overhead to the SVC nodes.
򐂰 Data Placement Advisor
The Data Placement Advisor uses workload statistics to make a cost benefit decision as to
which extents are to be candidates for migration to a higher performance tier.
This process also identifies extents that must be migrated back to a lower tier.
򐂰 Data Migration Planner (DMP)
By using the extents that were previously identified, the DMP builds the extent migration
plans for the storage pool. The DMP builds two plans:
– Automatic Data Relocation (ADR mode) plan to migrate extents across adjacent tiers
– Rebalance (RB mode) plan to migrate extents within the same tier
򐂰 Data Migrator
This process involves the actual movement or migration of the volume’s extents up to, or
down from, the higher disk tier. The extent migration rate is capped at a maximum of
30 MBps, which equates to approximately 3 TB per day that is migrated between disk tiers.

When Easy Tier is enabled, it performs the following actions among three tiers, as presented
in Figure 7-6 on page 370:
򐂰 Promote
This action moves the relevant hot extents to a higher performing tier.
򐂰 Swap
This action exchanges a cold extent in an upper tier with a hot extent in a lower tier.
򐂰 Warm demote:
– Warm demote prevents performance overload of a tier by demoting a warm extent to
the lower tier.
– This action is triggered when bandwidth or IOPS exceeds a predefined threshold.
򐂰 Demote or cold demote
The coldest data is moved to a lower HDD tier. This action is only supported between HDD
tiers.
򐂰 Expanded cold demote
This action demotes the appropriate sequential workloads to the lowest tier to better use
Nearline disk bandwidth.
򐂰 Storage pool balancing:
– This action redistributes extents within a tier to balance utilization across MDisks for
maximum performance.
– Storage pool balancing moves hot extents from higher-utilized MDisks to lower-utilized
MDisks.
– Storage pool balancing exchanges extents between higher-utilized MDisks and
lower-utilized MDisks.
򐂰 Easy Tier attempts to migrate the most active volume extents up to SSD first.
򐂰 A previous migration plan and any queued extents that are not yet relocated are
abandoned.

Note: Extent migration occurs only between adjacent tiers. In a three-tiered storage pool,
Easy Tier will not move extents from SSDs directly to NL-SAS and vice versa without
moving the extents first to SAS drives.

Easy Tier extent migration types are presented in Figure 7-7 on page 372.

Figure 7-7 Easy Tier extent migration types

7.2.5 Easy Tier operating modes


Easy Tier includes the following main operating modes:
򐂰 Off
򐂰 Evaluation or measurement only
򐂰 Automatic data placement or extent migration
򐂰 Storage pool balancing

Easy Tier off mode


With Easy Tier turned off, no statistics are recorded and no cross-tier extent migration occurs.
In this mode, only storage pool balancing, which means that extents are migrated within the
same storage pool, is active.

Evaluation or measurement only mode


Easy Tier evaluation or measurement only mode collects usage statistics for each extent in a
single-tier storage pool where the Easy Tier value is set to On for both the volume and the
pool. This collection is typically done for a single-tier pool that contains only HDDs so that the
benefits of adding SSDs to the pool can be evaluated before any major hardware acquisition.

A dpa_heat.nodeid.yymmdd.hhmmss.data statistics summary file is created in the /dumps
directory of the SVC nodes. This file can be offloaded from the SVC nodes with the PuTTY
Secure Copy Client (PSCP) pscp -load command or by using the GUI, as described in IBM
System Storage SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521. A web browser is used to view the report that is created by the tool.
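
The following commands are a sketch of the offload; the saved PuTTY session name (ITSO_SVC),
the user name, the cluster_ip placeholder, the heat file name, and the local directory are all
hypothetical examples. First, list the available files on the SVC CLI:

lsdumps

Then, from a Windows workstation with PuTTY installed, copy the heat file:

pscp -load ITSO_SVC superuser@cluster_ip:/dumps/dpa_heat.node1.150401.120000.data C:\EasyTier\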

Automatic data placement or extent migration mode
In automatic data placement or extent migration operating mode, the storage pool parameter
-easytier on or auto must be set, and the volumes in the pool must have -easytier on. The
storage pool must also contain MDisks with different disk tiers (a multitiered storage pool).

Dynamic data movement is not apparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated, as explained
in “Implementation rules” on page 373.

The statistic summary file is also created in this mode. This file can be offloaded for input to
the advisor tool. The tool produces a report on the extents that are moved to a higher tier and
a prediction of performance improvement that can be gained if more higher-tier disks are
available.

Options: The Easy Tier function can be turned on or off at the storage pool level and at the
volume level.

Storage pool balancing


Storage pool balancing is a new feature within the 7.3 code. Although storage pool balancing
is associated with Easy Tier, it operates independently of Easy Tier and does not require an
Easy Tier license. This feature assesses the extents that are written in a pool and balances
them automatically across all MDisks within the pool. This process works with Easy Tier when
multiple classes of disks exist in a single pool.

The process automatically balances existing data when new MDisks are added into an
existing pool, even if the pool contains only a single type of drive. This does not mean that the
process migrates extents from existing MDisks to achieve an even extent distribution among
all old and new MDisks in the storage pool. The Easy Tier rebalance (RB) migration plan within
a tier is based on performance and not on the capacity of the underlying MDisks.

Note: Storage pool balancing can be used to balance extents when mixing different size
disks of the same performance tier. For example, when adding larger capacity drives to a
pool with smaller capacity drives of the same class, storage pool balancing redistributes
the extents to take advantage of the additional performance of the new MDisks.

7.2.6 Implementation considerations


Easy Tier is a licensed feature, except for storage pool balancing, which is a no-charge
feature that is enabled, by default. Easy Tier comes as part of the SVC code. For Easy Tier to
migrate extents, you must have disk storage available that has different tiers, for example, a
mix of SSDs and HDDs.

Implementation rules
Remember the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the SVC:
򐂰 Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for these volumes is supported, but you cannot migrate extents on
these volumes unless you convert image or sequential volume copies to striped volumes.
򐂰 Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of the other copy.

Volume mirroring consideration: Volume mirroring can have different workload
characteristics on each copy of the data because reads are normally directed to the
primary copy and writes occur to both copies. Therefore, the number of extents that
Easy Tier migrates to the SSD tier might differ for each copy.

򐂰 If possible, the SVC creates volumes or volume expansions by using extents from MDisks
from the HDD tier. However, it uses extents from MDisks from the SSD tier, if necessary.

When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even if the volume is between
pools that both have Easy Tier automatic data placement enabled. Automatic data placement
for the volume is re-enabled when the migration is complete.

Limitations
When you use IBM System Storage Easy Tier on the SVC, Easy Tier has the following
limitations:
򐂰 Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
򐂰 Migrating extents
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
򐂰 Migrating a volume to another storage pool
When the SVC migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to
its new storage pool, Easy Tier automatic data placement between the generic SSD tier
and the generic HDD tier resumes for the moved volume, if appropriate.
When the SVC migrates a volume from one storage pool to another, it attempts to migrate
each extent to an extent in the new storage pool from the same tier as the original extent.
In several cases, such as where a target tier is unavailable, the other tier is used. For
example, the generic SSD tier might be unavailable in the new storage pool.
򐂰 Migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with
Easy Tier automatic data placement mode that is active is migrated to image mode, Easy
Tier automatic data placement mode is no longer active on that volume.
򐂰 Image mode and sequential volumes cannot be candidates for automatic data placement;
however, Easy Tier supports evaluation mode for image mode volumes.

7.2.7 Modifying the Easy Tier setting


The Easy Tier setting for storage pools and volumes can be changed only through the
command line. Use the chvdisk command to turn off or turn on Easy Tier on selected
volumes. Use the chmdiskgrp command to change the status of Easy Tier on selected
storage pools, as shown in Example 7-2 on page 375.

Example 7-2 Changing the Easy Tier setting
IBM_2145:ITSO_SVC1:superuser>lsvdisk test01
id 0
name test01
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018E92083000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier off

easy_tier_status measured
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool

IBM_2145:ITSO_SVC1:superuser>chvdisk -easytier on test01

IBM_2145:ITSO_SVC1:superuser>lsvdisk test01
id 0
name test01
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name v7000_1_gen1_pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018E92083000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0

mdisk_grp_name v7000_1_gen1_pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name v7000_1_gen1_pool

IBM_2145:ITSO_SVC1:superuser>lsmdiskgrp v7000_1_gen2_pool
id 1
name v7000_1_gen2_pool
status online
mdisk_count 3
vdisk_count 0
capacity 300.00GB
extent_size 1024
free_capacity 300.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 3
tier_capacity 300.00GB
tier_free_capacity 300.00GB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no

compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id
site_name
parent_mdisk_grp_id 1
parent_mdisk_grp_name v7000_1_gen2_pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no

IBM_2145:ITSO_SVC1:superuser>chmdiskgrp -easytier off v7000_1_gen2_pool

IBM_2145:ITSO_SVC1:superuser>lsmdiskgrp v7000_1_gen2_pool
id 1
name v7000_1_gen2_pool
status online
mdisk_count 3
vdisk_count 0
capacity 300.00GB
extent_size 1024
free_capacity 300.00GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
easy_tier off
easy_tier_status inactive
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 3
tier_capacity 300.00GB
tier_free_capacity 300.00GB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id
site_name
parent_mdisk_grp_id 1
parent_mdisk_grp_name v7000_1_gen2_pool
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no

7.2.8 Monitoring tools
IBM Storage Tier Advisor Tool (STAT) is a Windows console application that analyzes heat
data files that are produced by Easy Tier. STAT produces a graphical display of the amount of
“hot” data per volume and predicts how additional flash drive (SSD) capacity, enterprise
drives, and nearline drives might improve the performance for the system by storage pool.

Heat data files are produced approximately once a day (that is, every 24 hours) when Easy
Tier is active on one or more storage pools, and they summarize the activity per volume since
the prior heat data file was produced. On the SVC and Storwize series products, the heat data
file is in the /dumps directory on the configuration node and is named
dpa_heat.node_name.time_stamp.data.

Any existing heat data file is erased after seven days. The file must be offloaded by the user
and STAT must be invoked from a Windows command prompt console with the file specified
as a parameter. The user can also specify the output directory. STAT creates a set of HTML
files and the user can then open the resulting index.html in a browser to view the results.
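
The following invocation is a sketch only; the heat file name and the output directory are
placeholders, and the -o option for the output directory is an assumption that you should check
against the readme file that is provided with your STAT version:

STAT.exe -o C:\EasyTier\report dpa_heat.node1.150401.120000.data

STAT then writes the HTML files (and, for SVC 7.3 and later, the CSV files in the Data_files
directory) to that location; open index.html in a browser to view the report.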

Updates to STAT for SVC 7.3 added additional capability for reporting. As a result, when STAT
is run on a heat map file, an additional three CSV files are created and placed in the
Data_files directory.

IBM STAT can be downloaded from the IBM Support website:


http://www.ibm.com/support/docview.wss?uid=ssg1S4000935

Figure 7-8 shows the CSV files that are highlighted in the Data_files directory after running
STAT over an SVC heatmap.

Figure 7-8 CSV files created by STAT for Easy Tier

In addition to STAT, SVC 7.3 code provides a further utility, a Microsoft Excel file, for
creating graphical reports about the Easy Tier workload. The IBM STAT Charting Utility takes
the output of the three CSV files and turns it into graphs for simple reporting.

The three new graphs display the following information:
򐂰 Workload Categorization report
New workload visuals help you compare activity across tiers within and across pools to
help determine the optimal drive mix for the current workloads. The output is illustrated in
Figure 7-9.

Figure 7-9 STAT Charting Utility Workload Categorization report

򐂰 Daily Movement report


This new Easy Tier summary report runs every 24 hours and illustrates data migration
activity (in 5-minute intervals) that can help you visualize migration types and patterns for
current workloads. The output is illustrated in Figure 7-10 on page 381.

Figure 7-10 STAT Charting Utility Daily Summary report

򐂰 Workload Skew report


This report shows the skew of all workloads across the system in a graph to help you
visualize and accurately tier configurations when you add capacity or a new system. The
output is illustrated in Figure 7-11.

Figure 7-11 STAT Charting Utility Workload Skew report

The STAT Charting Utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251

7.2.9 More information
For more information about planning and configuration considerations, best practices, and
monitoring and measurement tools, see IBM System Storage SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, and Implementing IBM Easy Tier with
IBM Real-time Compression, TIPS1072.

7.3 Thin provisioning


In a shared storage environment, thin provisioning is a method for optimizing the usage of
available storage. It relies on the allocation of blocks of data on demand versus the traditional
method of allocating all of the blocks up front. This methodology eliminates almost all white
space, which helps avoid the poor usage rates (often as low as 10%) that occur in the
traditional storage allocation method where large pools of storage capacity are allocated to
individual servers but remain unused (not written to).

Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The IBM SVC has this capability
for Fibre Channel and iSCSI provisioned volumes.

An example of thin provisioning is when a storage system contains 5000 GB of usable
storage capacity, but the storage administrator mapped volumes of 500 GB each to 15 hosts.
In this example, the storage administrator makes 7500 GB of storage space visible to the
hosts, even though the storage system has only 5000 GB of usable space, as shown in
Figure 7-12. In this case, the 15 hosts cannot immediately use all of the 500 GB that is
provisioned to each of them. The storage administrator must monitor the system and add
storage, as needed.

Figure 7-12 Concept of thin provisioning

You can think of thin provisioning as the way that airlines sell more tickets for a flight than
there are physical seats, assuming that some passengers do not appear at check-in. The airline
does not assign actual seats at the time of sale, which avoids each client having a claim on a
specific seat number. The same concept applies to thin provisioning: the SVC is the plane and
its volumes are the seats. The storage administrator (the airline ticketing system) must
closely monitor the allocation process and set correct thresholds.

7.3.1 Configuring a thin-provisioned volume


Volumes can be configured as thin-provisioned or fully allocated. Thin-provisioned volumes
are created with real and virtual capacities. You can still create volumes by using a striped,
sequential, or image mode virtualization policy, as you can with any other volume.

Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other SVC components (such as FlashCopy or
remote copy) and to the hosts. For example, you can create a volume with a real capacity of
only 100 GB but a virtual capacity of 1 TB. The actual space that is used by the volume on the
SVC will be 100 GB but hosts will see a 1 TB volume.

A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.

Thin-provisioned volumes are available in two operating modes: autoexpand and
non-autoexpand. You can switch the mode at any time. If you select the autoexpand feature,
the SVC automatically adds a fixed amount of more real capacity to the thin volume, as
required. Therefore, the autoexpand feature attempts to maintain a fixed amount of unused
real capacity for the volume. This amount is known as the contingency capacity. The
contingency capacity is initially set to the real capacity that is assigned when the volume is
created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.

A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and the volume must
expand.

Warning threshold: Enable the warning threshold (by using email or a Simple Network
Management Protocol (SNMP) trap) when you are working with thin-provisioned volumes.
You can enable the warning threshold on the volume, and on the storage pool side,
especially when you do not use the autoexpand mode. Otherwise, the thin volume goes
offline if it runs out of space.
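
As a minimal sketch, the warning threshold and the autoexpand mode can be changed on an
existing thin-provisioned volume and on its storage pool from the CLI. The volume name VD_TPV,
the pool name MDG_DS83, and the 80% threshold are examples only, and the exact parameters
should be verified against the CLI reference for your code level:

svctask chvdisk -warning 80% VD_TPV          # warn when 80% of the real capacity is used
svctask chvdisk -autoexpand on VD_TPV        # let the SVC grow the real capacity automatically
svctask chmdiskgrp -warning 80% MDG_DS83     # warn when 80% of the pool capacity is allocated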

Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.

A thin-provisioned volume can be converted non-disruptively to a fully allocated volume, or
vice versa, by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated
copy from the volume after they are synchronized.

The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so
that grains that contain all zeros do not cause any real capacity to be used. Usually, if the
SVC is to detect zeros on the volume, you must use software on the host side to write zeros to
all unused space on the disk or file system.

Tip: Consider the use of thin-provisioned volumes as targets in FlashCopy relationships.

Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.

Grain definition: The grain is defined when the volume is created and can be 32 KB,
64 KB, 128 KB, or 256 KB.

Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.

To create a thin-provisioned volume, choose Create Volumes from the Volumes menu in a
dynamic menu and select Thin-Provision, as shown in Figure 7-13. Enter the required
capacity and volume name.

Figure 7-13 Thin-provisioned volume creation

In the Advanced Settings menu of this wizard, you can set virtual and real capacity, warning
thresholds, and grain size, as shown in Figure 7-14 on page 385.

Figure 7-14 Advanced options
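
A thin-provisioned volume with the same attributes can also be created from the CLI with the
mkvdisk command; the pool name, I/O group, sizes, and volume name in this sketch are
hypothetical examples:

svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name ThinVol01

The -rsize parameter sets the initial real capacity (here, 2% of the virtual capacity), and
-autoexpand keeps the contingency capacity available as the volume fills.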

For more information about the configuration procedure for thin-provisioned volumes, see
7.3.1, “Configuring a thin-provisioned volume” on page 383.

7.3.2 Performance considerations


Thin-provisioned volumes require more I/Os because of the directory accesses. For truly
random workloads, a thin-provisioned volume requires approximately one directory I/O for
every user I/O, so performance can drop to approximately 50% of that of a normal volume.
The directory is held in two-way write-back cache (similar to the SVC fast-write cache), so
certain applications perform better.

Thin-provisioned volumes require more CPU processing so that the performance per I/O
Group might be slower. Use the striping policy to spread thin-provisioned volumes across
many storage pools as with normal, generic, fully allocated volumes.

Important: Do not use thin-provisioned volumes where high I/O performance is required.

Thin-provisioned volumes save capacity only if the host server does not write to the whole
volume. Whether a thin-provisioned volume works well partly depends on how the file
system allocates space. Certain file systems (for example, New Technology File System
[NTFS]) write across the whole volume before they reuse the space of deleted files. Other file
systems reuse space in preference to allocating new space.

File system problems can be moderated by tools, such as “defrag,” or by managing storage by
using host Logical Volume Managers (LVM).

The thin-provisioned volume also depends on how applications use the file system. For
example, some applications delete log files only when the file system is nearly full.

Important: Do not use defrag on thin-provisioned volumes. The defragmentation process
can write data to different areas of a volume, which can cause a thin-provisioned volume to
grow up to its virtual size.

There is no single recommendation for thin-provisioned volumes; their performance depends
on how they are used in the particular environment. For the best performance, use fully
allocated volumes instead of thin-provisioned volumes.

Note: Starting with SVC firmware V7.3, all of the cache subsystem architecture was
redesigned. Now, thin-provisioned volumes can benefit from lower cache functions (such
as coalescing writes or prefetching), which greatly improve performance.

7.3.3 Limitations of virtual capacity


A few factors (extent and grain size) limit the virtual capacity of thin-provisioned volumes
beyond the factors that limit the capacity of regular volumes. Table 7-2 shows the maximum
thin-provisioned volume virtual capacities for an extent size.

Table 7-2 Maximum thin-provisioned volume virtual capacities for an extent size

Extent size in MB | Maximum volume real capacity in GB | Maximum thin-provisioned volume virtual capacity in GB
16    | 2,048   | 2,000
32    | 4,096   | 4,000
64    | 8,192   | 8,000
128   | 16,384  | 16,000
256   | 32,768  | 32,000
512   | 65,536  | 65,000
1024  | 131,072 | 130,000
2048  | 262,144 | 260,000
4096  | 262,144 | 262,144
8192  | 262,144 | 262,144

Table 7-3 on page 387 shows the maximum thin-provisioned volume virtual capacities for a
grain size.

Table 7-3 Maximum thin-provisioned volume virtual capacities for a grain size

Grain size in KB | Maximum thin-provisioned volume virtual capacity in GB
32   | 260,000
64   | 520,000
128  | 1,040,000
256  | 2,080,000

For more information and detailed performance considerations for configuring thin
provisioning, see IBM System Storage SAN Volume Controller Best Practices and
Performance Guidelines, SG24-7521. You can also go to the IBM SAN Volume Controller 7.4
Knowledge Center at this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU_7.4.0/com.ibm.storage.svc.console.740.doc/svc_ichome_740.html?cp=STPVGU%2F0

7.4 Real-time Compression Software


The IBM Real-time Compression (RtC) software that is embedded in the SVC solution
addresses the requirements of primary storage data reduction, including performance. It does
so by using a purpose-built technology, the Random Access Compression Engine (RACE). It
offers the following benefits:
򐂰 Compression for active primary data
IBM RtC can be used with active primary data. Therefore, it supports workloads that are
not candidates for compression in other solutions. The solution supports online
compression of existing data. Storage administrators can regain free disk space in an
existing storage system without requiring administrators and users to clean up or archive
data. This configuration significantly enhances the value of existing storage assets, and
the benefits to the business are immediate. The capital expense of upgrading or
expanding the storage system is delayed.
򐂰 Compression for replicated or mirrored data
Remote volume copies can be compressed in addition to the volumes at the primary
storage tier. This process reduces storage requirements in Metro Mirror and Global Mirror
destination volumes, as well.
򐂰 No changes to the existing environment required
IBM RtC is part of the storage system. It was designed with the goal of transparency so
that it can be implemented without changes to applications, hosts, networks, fabrics, or
external storage systems. The solution is not apparent to hosts, so users and applications
continue to work as-is. Compression occurs within the SVC system.
򐂰 Overall savings in operational expenses
More data is stored in the same rack space, so fewer storage expansion enclosures are required
to store a data set. This reduced rack space has the following benefits:
– Reduced power and cooling requirements. More data is stored in a system, which
requires less power and cooling per gigabyte or used capacity.
– Reduced software licensing for more functions in the system. More data that is stored
per enclosure reduces the overall spending on licensing.



Tip: Implementing compression in the SVC provides the same benefits to internal
SSDs and externally virtualized storage systems.

򐂰 Disk space savings are immediate
The space reduction occurs when the host writes the data. This process is unlike other
compression solutions in which some or all of the reduction is realized only after a
post-process compression batch job is run.

7.4.1 Common use cases


This section addresses the most common use cases for implementing compression:
򐂰 General-purpose volumes
򐂰 Databases
򐂰 Virtualized infrastructures

General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geoseismic data, and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall consumed
space. More space can be provided to users without any change to the environment.

Many file types can be stored on general-purpose servers. However, for practical purposes, the
estimated compression ratios that follow are based on actual field experience. Expected
compression ratios are 50% - 60%.

File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.

Databases
Database information is stored in table space files. High compression ratios are common in
database volumes. Examples of databases that can greatly benefit from RtC are IBM DB2®,
Oracle, and Microsoft SQL Server. Expected compression ratios are 50% - 80%.

Important: Certain databases offer optional built-in compression. Generally, do not
compress already compressed database files.

Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source.

Examples of virtualization solutions that can greatly benefit from RtC are VMware, Microsoft
Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios are 45% -
75%.

Tip: Virtual machines (VMs) with file systems that contain compressed files are not good
candidates for compression.

7.4.2 Real-time Compression concepts
The Random Access Compression Engine (RACE) technology is based on over 50 patents
that are not primarily about compression. Instead, they define how to make industry standard
Lempel-Ziv (LZ) compression of primary storage operate in real time and allow random
access. The primary intellectual property behind this technology is the RACE component.

At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an inline compression
technology, which means that each host write is compressed as it passes through the SVC
software to the disks. This technology has a clear benefit over other compression
technologies that are post-processing based. These technologies do not provide immediate
capacity savings; therefore, they are not a good fit for primary storage workloads, such as
databases and active data set applications.

RACE is based on the Lempel-Ziv lossless data compression algorithm and operates in a
real-time method. When a host sends a write request, the request is acknowledged by the
write cache of the system, and then staged to the storage pool. As part of its staging, the
write request passes through the compression engine and is then stored in compressed
format onto the storage pool. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage.

Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool.

Like the SVC system itself, IBM RtC is a self-tuning solution. It adapts to the
workload that runs on the system at any particular moment.

7.4.3 Random Access Compression Engine


To understand why RACE is unique, you need to review the traditional compression
techniques. This description is not about the compression algorithm itself, that is, how the
data structure is reduced in size mathematically. Rather, the description is about how the data
is laid out within the resulting compressed output.

Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities, such as the zip and gzip utilities. At a high level, these utilities take a file
as their input, and parse the data by using a sliding window technique. Repetitions of data are
detected within the sliding window history, most often 32 KB. Repetitions outside of the
window cannot be referenced. Therefore, the file cannot be reduced in size unless data is
repeated when the window “slides” to the next 32 KB slot.

Figure 7-15 on page 390 shows compression that uses a sliding window, where the first two
repetitions of the string “ABCDEF” fall within the same compression window, and can
therefore be compressed by using the same dictionary. The third repetition of the string falls
outside of this window and therefore cannot be compressed by using the same compression
dictionary as the first two repetitions, reducing the overall achieved compression ratio.



Figure 7-15 Compression that uses a sliding window

Traditional data compression in storage systems


The traditional approach that is taken to implement data compression in storage systems is
an extension of how compression works in the compression utilities previously mentioned.
Similar to compression utilities, the incoming data is broken into fixed chunks, and then each
chunk is compressed and extracted independently.

However, drawbacks exist to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.

Figure 7-16 on page 391 shows an example of how the data is broken into fixed size chunks
(in the upper-left corner of the figure). It also shows how each chunk gets compressed
independently into variable length compressed chunks (in the upper-right side of the figure).
The resulting compressed chunks are stored sequentially in the compressed output.

Although this approach is an evolution from compression utilities, it is limited to
low-performance use cases, mainly because this approach does not provide real random access
to the data.

Figure 7-16 Traditional data compression in storage systems

Random Access Compression Engine


The IBM patented Random Access Compression Engine implements an inverted approach
when compared to traditional approaches to compression. RACE uses variable-size chunks
for the input, and produces fixed-size chunks for the output.

This method enables an efficient and consistent method to index the compressed data
because the data is stored in fixed-size containers.

Figure 7-17 on page 392 shows Random Access Compression.



Figure 7-17 Random Access Compression

Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within the chunk. The
number of repetitions is affected by how much the bytes stored in the chunk are related to
each other. The relationship between bytes is driven by the format of the object. For example,
an office document might contain textual information, and an embedded drawing, such as this
page. Because the chunking of the file is arbitrary, it has no notion of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing. This process yields a lower compression ratio because
the different data types mixed together cause a suboptimal dictionary of repetitions. That is,
fewer repetitions can be detected because a repetition of bytes in a text object is unlikely to be
found in a drawing.

This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.

This challenge was also addressed with the predecide mechanism that was introduced in
version 7.1.

Predecide mechanism
Certain data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space but still requires resources, such as CPU and memory. To avoid
spending resources on uncompressible data, and to provide the ability to use a different,
more effective (in this particular case) compression algorithm, IBM invented a predecide
mechanism that was first introduced in version 7.1.

Chunks that fall below a certain compression ratio are skipped by the compression
engine, which saves CPU time and memory processing. Chunks that are not suitable for the main
compression algorithm, but that can still be compressed well with another algorithm, are
marked and processed accordingly. The result can vary because predecide
does not check the entire block, only a sample of it.

Figure 7-18 shows how the detection mechanism works.

Figure 7-18 Detection mechanism

Temporal compression
RACE offers a technology leap, which is called temporal compression, beyond location-based
compression.

When host writes arrive to RACE, they are compressed and fill fixed size chunks that are also
called compressed blocks. Multiple compressed writes can be aggregated into a single
compressed block. A dictionary of the detected repetitions is stored within the compressed
block. When applications write new data or update existing data, the data is typically sent
from the host to the storage system as a series of writes. Because these writes are likely to
originate from the same application and be from the same data type, more repetitions are
usually detected by the compression algorithm.



This type of data compression is called temporal compression because the data repetition
detection is based on the time that the data was written into the same compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. Temporal compression offers a higher compression ratio because the
compressed data in a block represents a more homogeneous set of input data.

Figure 7-19 shows (in the upper part) how three writes sent one after the other by a host end
up in different chunks. They get compressed in different chunks because their location in the
volume is not adjacent. This approach yields a lower compression ratio because the same
data must be compressed by using three separate dictionaries. When the same
three writes are sent through RACE (in the lower part of the figure), the writes are
compressed together by using a single dictionary. This approach yields a higher compression
ratio than location-based compression.

Figure 7-19 Location-based versus temporal compression

7.4.4 Random Access Compression Engine in the SVC software stack


It is important to understand where the RACE technology is implemented in the SVC software
stack. This location determines how it applies to the SVC components.

RACE technology is implemented into the SVC thin provisioning layer, and it is an organic
part of the stack. The SVC software stack is shown in Figure 7-20. Compression is
transparently integrated with existing system management design. All of the SVC advanced
features are supported on compressed volumes. You can create, delete, migrate, map
(assign), and unmap (unassign) a compressed volume as though it were a fully allocated
volume. In addition, you can use RtC with Easy Tier on the same volumes. This compression
method provides nondisruptive conversion between compressed and decompressed
volumes. This conversion provides a uniform user experience and eliminates the need for
special procedures when dealing with compressed volumes.

Figure 7-20 RACE integration within the SVC 7.4 software stack

7.4.5 Data write flow


When a host sends a write request to the SVC, it reaches the upper-cache layer. The host is
immediately sent an acknowledgment of its I/Os.



When the upper cache layer destages to RACE, the I/Os are sent to the thin-provisioning
layer and then to RACE, where the original host write or writes are compressed, if necessary.
The metadata that holds the index of the compressed volume is updated, if needed, and
compressed, as well.

7.4.6 Data read flow


When a host sends a read request to the SVC for compressed data, the request is forwarded
directly to the RtC component:
򐂰 If the RtC component contains the requested data, the SVC cache replies to the host with
the requested data without having to read the data from the lower-level cache or disk.
򐂰 If the RtC component does not contain the requested data, the request is forwarded to the
SVC lower-level cache.
򐂰 If the lower-level cache contains the requested data, it is sent up the stack and returned to
the host without accessing the storage.
򐂰 If the lower-level cache does not contain the requested data, it sends a read request to the
storage for the requested data.

7.4.7 Compression of existing data


In addition to compressing data in real time, you can also compress existing data sets. To do
so, you need to add a compressed mirrored copy of an existing volume by right-clicking a
particular volume and selecting Volume Copy Actions → Add Mirrored Copy, as shown on
Figure 7-21.

Figure 7-21 Adding a volume copy

Then, you select a compressed type of volume and select a storage pool where you want to
place the new copy (Figure 7-22). If you do not want to move the volume to different storage,
select the same storage pool as the existing, original volume copy.

Figure 7-22 Selecting a compressed volume copy

After the copies are fully synchronized, you can delete the original, uncompressed copy, as
shown on Figure 7-23 on page 398.



Figure 7-23 Deleting the uncompressed copy

As a result, you compressed data on the existing volume, as shown on Figure 7-24. This
process is nondisruptive, so the data remains online and accessible by applications and
users.

Figure 7-24 Compressed, original volume

This capability enables clients to regain space from the storage pool, which can then be
reused for other applications.
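
The same conversion can be performed from the CLI. The following sketch assumes a hypothetical
volume named existing_vol and a pool named Pool0; adjust the names and the real-capacity settings
to your environment:

   svctask addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand -compressed existing_vol
   svcinfo lsvdisksyncprogress existing_vol
   svctask rmvdiskcopy -copy 0 existing_vol

Run lsvdisksyncprogress until the new copy reports 100% synchronization before you remove the
original copy, and confirm that copy 0 is indeed the uncompressed copy before you delete it.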

With the virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. This capability allows them to see
a tremendous return on their SVC investment. On the initial purchase of an SVC with RtC,
clients can defer their purchase of new storage. When new storage must be acquired, IT can
purchase less storage than would be required without compression.

Important: The SVC reserves some of its resources, such as CPU cores and RAM
memory, after you create one compressed volume or volume copy. This reserve might
affect your system performance if you do not plan for the reserve in advance.

7.4.8 Configuring compressed volumes


Licensing is required to use compression on the SVC. With the SVC, RtC is licensed by
capacity, per terabyte of virtual data.

The configuration is similar to generic volumes and not apparent to users. From the Volumes
menu in the dynamic menu, choose Create Volumes and select Compressed, as shown in
Figure 7-25. Choose the storage pool that you want to use and enter the required capacity
and volume name.

Figure 7-25 Configuring a compressed volume

The summary line at the bottom of the wizard provides information about the allocated
(virtual) capacity and the real capacity that data uses on this volume. In our example, we
defined a 10 GiB volume, but the real capacity is only 204.80 MiB because no data exists
from the host.

When the compressed volume is configured, you can directly map it to the host or map it later
on request.
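
The equivalent operation is also available from the CLI. The following single command is a sketch
with hypothetical names (Pool0, comp_vol01) and sizes:

   svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -compressed -name comp_vol01

The -compressed flag creates the volume as a compressed (thin-provisioned) volume, and the real
capacity grows automatically as compressed data is written.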



7.4.9 SVC 2145-DH8 node software and hardware updates related to Real-time Compression
The SVC 2145-DH8 node and SVC software version 7.3 introduce significant software and
hardware improvements that enhance and extend the applicability of the RtC feature. In this
section, we provide an overview of these enhancements:
򐂰 Software enhancements
SVC cache rearchitecture
򐂰 Hardware enhancements:
– Additional/enhanced CPU options
– Increased memory options
– Optional Intel Quick Assist Acceleration Technology (Coleto Creek) compression acceleration cards

7.4.10 Software enhancements


SVC software version 7.3 introduces an enhanced, dual-level caching model. This model differs from the
single-level cache model of previous software versions.

In the previous model, the RtC software component sits below the single-level read/write
cache. The benefit of this model is that the upper-level read/write cache masks from the host
any latency that is introduced by the RtC software component. However, in this single-level
caching model, the destaging of writes for compressed I/Os to disk might not be optimal for
certain workloads because the RtC component interacts directly with uncached storage.
Figure 7-26 on page 401 depicts compression code in the current SVC software stack.

Figure 7-26 Real-time Compression code in the SVC software stack

In the new, dual-level caching model, the RtC software component sits below the upper-level
fast write cache and above the lower-level advanced read/write cache. Several advantages
are available for this dual-level model regarding RtC:
򐂰 Host writes, whether to compressed or uncompressed volumes, are still serviced directly
through the upper-level write cache, preserving low host write I/O latency. Response time
can improve with this model because the upper cache flushes smaller amounts of data to RACE
more frequently.
򐂰 The performance of the destaging of compressed write I/Os to storage improves because
these I/Os are now destaged through the advanced lower-level cache, as opposed to
directly to storage.
򐂰 The existence of a lower-level write cache below the RtC component in the software stack
allows for the coalescing of compressed writes, and as a result, a reduction in back-end
I/Os due to the ability to perform full-stride writes for compressed data.
򐂰 The existence of a lower-level read cache below the RtC component in the software stack
allows the temporal locality nature of RtC to benefit from pre-fetching from the back-end
storage.
򐂰 The main (lower-level) cache now stores compressed data for compressed volumes,
increasing the effective size of the lower-level cache.
򐂰 Support for larger numbers of compressed volumes is available.



7.4.11 Hardware updates
The SVC 2145-DH8 node introduces numerous hardware enhancements. Several of these
enhancements relate directly to the RtC feature and offer significant performance and
scalability improvements over previous hardware versions.

Additional/enhanced CPU options


The 2145-DH8 node offers an updated primary CPU that contains eight cores as compared to
the four-core and six-core CPUs available in previous hardware versions. Additionally, the
2145-DH8 node offers the option of a secondary eight-core CPU for use with RtC. This
additional, compression-dedicated CPU allows for improved overall system performance
when using compression over the previous hardware models.

Note: To use the RtC feature on 2145-DH8 nodes, the secondary CPU is required.

Increased memory options


The 2145-DH8 node offers the option to increase the node memory from the base 32 GB to
64 GB, for use with RtC. This additional, compression-dedicated memory allows for improved
overall system performance when using compression over the previous hardware models.

Note: To use the RtC feature on 2145-DH8 nodes, the additional 32 GB memory option is
required.

Optional Intel Quick Assist Acceleration Technology (Coleto Creek) compression acceleration cards
The 2145-DH8 node offers the option to include one or two Intel Quick Assist compression
acceleration cards that are based on the Coleto Creek chip set. The introduction of these Intel
compression acceleration cards in the SVC 2145-DH8 node is an industry first, providing
dedicated processing power and greater throughput over previous models.

Note: To use the RtC feature on 2145-DH8 nodes, at least one Quick Assist compression
acceleration card is required. With a single card, the maximum number of compressed
volumes per I/O Group is 200. With the addition of a second Quick Assist card, the
maximum number of compressed volumes per I/O Group is 512.

For more information about the compression accelerator cards, see Chapter 3, “Planning and
configuration” on page 73.

7.4.12 Dual RACE component


In the 7.4 SVC version, compression code was further enhanced by the addition of a second
RACE component per SVC node. This feature takes advantage of multi-core controller
architecture and uses the compression accelerator cards more effectively. The dual RACE
component adds the second RACE instance, which works in parallel with the first instance, as
shown on Figure 7-27 on page 403.

Figure 7-27 Dual RACE enhancement

Thanks to dual RACE enhancement, the compression performance can be boosted up to two
times for compressed workloads when compared to earlier SVC code.

To take advantage of dual RACE, several software and hardware requirements must be met:
򐂰 The SVC software must be at level 7.4.
򐂰 Only SVC 2145-DH8 nodes are supported.
򐂰 A second eight-core CPU must be installed per SVC node.
򐂰 An additional 32 GB of memory must be installed per SVC node.
򐂰 At least one Coleto Creek acceleration card must be installed per SVC node. The second
acceleration card is not required.

Note: We recommend using two acceleration cards for the best performance.

When using the dual RACE feature, the acceleration cards are shared between RACE
components, which means that the acceleration cards are used simultaneously by both
RACE components. The rest of the resources, such as CPU cores and RAM memory, are evenly
divided between the RACE components. You do not need to manually enable dual RACE;
dual RACE triggers automatically when all minimal software and hardware requirements are
met. If the SVC is compression capable but the minimal requirements for dual RACE are not
met, only one RACE instance is used (as in the previous versions of the SVC code).
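
As a hedged sketch, one way to verify the hardware that is installed in each node is the lsnodehw
command (the exact output fields vary by code level); node1 is a hypothetical node name:

   svcinfo lsnodehw node1

Check that the second CPU, the additional memory, and at least one compression accelerator card are
reported before you rely on dual RACE being active.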

Figure 7-28 on page 404 shows how the SVC resources are split when using compression.



Figure 7-28 SVC CPU cores and RAM memory allocation

For more information about RtC and its deployment in the IBM SVC, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859.

Chapter 8. Advanced Copy Services


In this chapter, we describe the Advanced Copy Services functions that are available for the
SVC. Most of these functions are also available for the IBM Storwize product family.

In this chapter, we also describe the native IP replication.

In Chapter 9, “SAN Volume Controller operations using the command-line interface” on
page 493, we describe how to use the command-line interface (CLI) and Advanced Copy
Services.

In Chapter 10, “SAN Volume Controller operations using the GUI” on page 655, we explain
how to use the GUI and Advanced Copy Services.

This chapter includes the following topics:


򐂰 FlashCopy
򐂰 Reverse FlashCopy
򐂰 FlashCopy functional overview
򐂰 Implementing the SAN Volume Controller FlashCopy
򐂰 Volume mirroring and migration options
򐂰 Native IP replication
򐂰 Remote Copy
򐂰 Remote Copy commands
򐂰 Troubleshooting remote copy



8.1 FlashCopy
By using the FlashCopy function of the SVC, you can perform a point-in-time copy of one or
more volumes. In this section, we describe the inner workings of FlashCopy and provide
details of its configuration and use.

You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and cache and, therefore, the copy is not apparent
to the host.

Important: Because FlashCopy operates at the block level below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately. This process is performed by using a bitmap (or bit array), which tracks
changes to the data after the FlashCopy is started, and an indirection layer, which allows data
to be read from the source volume transparently.

8.1.1 Business requirements for FlashCopy


When you are deciding whether FlashCopy addresses your needs, you must adopt a
combined business and technical view of the problems that you want to solve. First,
determine the needs from a business perspective. Then, determine whether FlashCopy can
address the technical needs of those business requirements.

The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
򐂰 Rapidly creating consistent backups of dynamically changing data
򐂰 Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
򐂰 Rapidly creating copies of production data sets for application development and testing
򐂰 Rapidly creating copies of production data sets for auditing purposes and data mining
򐂰 Rapidly creating copies of production data sets for quality assurance

Regardless of your business needs, FlashCopy within the SVC is flexible and offers a broad
feature set, which makes FlashCopy applicable to many scenarios.

8.1.2 Backup improvements with FlashCopy


FlashCopy does not reduce the time that it takes to perform a backup to a traditional backup
infrastructure. However, FlashCopy can be used to minimize and, under certain conditions,
eliminate application downtime that is associated with performing backups. FlashCopy can also
transfer the resource usage of performing intensive backups away from production systems.

After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your infrastructure.
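
As a sketch of this approach (with hypothetical volume and mapping names), an incremental or
“no copy” FlashCopy mapping can be created and started from the CLI as follows:

   svctask mkfcmap -source prod_vol -target backup_vol -copyrate 50 -incremental -name fc_backup
   svctask startfcmap -prep fc_backup

A copy rate of 0 creates a “no copy” mapping, in which grains are copied to the target only when
they are about to be overwritten on the source.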

When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.

8.1.3 Restore with FlashCopy


FlashCopy can perform a restore from any existing FlashCopy mapping. Therefore, you can
restore (or copy) from the target to the source of your regular FlashCopy relationships. (It
might be easier to think of this method as reversing the direction of the FlashCopy mappings.)
This capability has the following benefits:
򐂰 You do not need to worry about pairing mistakes; you trigger a restore.
򐂰 The process appears instantaneous.
򐂰 You can maintain a pristine image of your data while you are restoring the data that was
the primary data.

This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.

Preferred practices: Although restoring from a FlashCopy is quicker than a traditional
tape media restore, you must not use restoring from a FlashCopy as a substitute for good
archiving practices. Instead, keep one to several iterations of your FlashCopies so that you
can near-instantly recover your data from the most recent history and keep your long-term
archive for your business.

In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files; to do so, you make the target available on a host. We suggest that you do not make the
target available to the source host because seeing duplicate disks causes problems for most
host operating systems. Copy the files to the source through the normal host data copy
methods for your environment.

8.1.4 Moving and migrating data with FlashCopy


FlashCopy can be used to facilitate the movement or migration of data between hosts while
minimizing downtime for applications. By using FlashCopy, application data can be copied
from source volumes to new target volumes while applications remain online. After the
volumes are fully copied and synchronized, the application can be brought down and then
immediately brought back up on the new server that is accessing the new FlashCopy target
volumes.

This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.



8.1.5 Application testing with FlashCopy
Testing a new version of an application or operating system that is using actual production
data is important. This testing ensures the highest quality possible for your environment.
FlashCopy makes this type of testing easy to accomplish without putting the production data
at risk or requiring downtime to create a consistent copy. You create a FlashCopy of your
source and use that copy for your testing. This copy is a duplicate of your production data
down to the block level so that even physical disk identifiers are copied. Therefore, it is
impossible for your applications to tell the difference.

8.1.6 Host and application considerations to ensure FlashCopy integrity


Because FlashCopy is at the block level, you must understand the interaction between your
application and the host operating system. From a logical standpoint, think of these objects as
“layers” that sit on top of one another. The application is the topmost layer, and beneath it is
the operating system layer. Both of these layers have various levels and methods of caching
data to provide better speed. Because the SVC and, therefore, FlashCopy sit below these
layers, they are unaware of the cache at the application or operating system layers.

To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy. The resulting copy requires the same
type of recovery procedure, such as log replay and file system checks, that is required
following a host crash. FlashCopies that are crash consistent often can be used following file
system and application recovery procedures.

Note: Although the best way to perform FlashCopy is to flush host cache first, certain
companies, such as Oracle, support using snapshots without it, as stated in Metalink note
604683.1.

Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
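
As a simple host-side sketch, on a Linux host with a file system that supports freezing, the cache
can be flushed and I/O suspended around the FlashCopy start. The mount point /appdata is
hypothetical:

   sync
   fsfreeze -f /appdata     (flush and suspend writes to the file system)
   (start the FlashCopy mapping or Consistency Group on the SVC at this point)
   fsfreeze -u /appdata     (resume I/O)

Other operating systems and applications provide their own equivalents, such as placing a database
into hot backup mode, as described earlier.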

Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases because the database maintains strict control over I/O. This method is in contrast
to flushing data from both the application and the backing database, which is the
recommended method because it is safer. However, the write-suspend method can be used when
such facilities do not exist or your environment is time sensitive.

8.1.7 FlashCopy attributes


The FlashCopy function in the SVC features the following attributes:
򐂰 The target is the time-zero copy of the source, which is known as the FlashCopy mapping
target.
򐂰 FlashCopy produces an exact copy of the source volume, including any metadata that was
written by the host operating system, Logical Volume Manager (LVM), and applications.

򐂰 The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
򐂰 The source and target volumes must be the same “virtual” size.
򐂰 The source and target volumes must be on the same SVC clustered system.
򐂰 The source and target volumes do not need to be in the same I/O Group or storage pool.
򐂰 The storage pool extent sizes can differ between the source and target.
򐂰 The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
򐂰 The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
򐂰 Consistency Groups are supported to enable FlashCopy across multiple volumes at the
same time.
򐂰 Up to 255 FlashCopy Consistency Groups are supported per system.
򐂰 Up to 512 FlashCopy mappings can be placed in one Consistency Group.
򐂰 The target volume can be updated independently of the source volume.
򐂰 Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the SVC I/O Group to prevent a single point of failure.
򐂰 FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
򐂰 Thin-provisioned FlashCopy (or Snapshot in the GUI) uses disk space only when updates
are made to the source or target data and not for the entire capacity of a volume copy.
򐂰 FlashCopy licensing is based on the virtual capacity of the source volumes.
򐂰 Incremental FlashCopy copies all of the data when you first start FlashCopy and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
򐂰 Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
򐂰 The maximum number of supported FlashCopy mappings is 4096 per SVC system.
򐂰 The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.

8.2 Reverse FlashCopy


Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. It supports multiple targets (up to 256) and therefore multiple rollback
points.

A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse
FlashCopy does not destroy the original target, which allows processes by using the target,
such as a tape backup, to continue uninterrupted.

The SVC also provides the ability to create an optional copy of the source volume to be made
before the reverse copy operation starts. This ability to restore back to the original source
data can be useful for diagnostic purposes.



Complete the following steps to restore from an on-disk backup:
1. Optional: Create a target volume (volume Z) and use FlashCopy to copy the production
volume (volume X) onto the new target for later problem analysis.
2. Create a FlashCopy map with the backup to be restored (volume Y) or (volume W) as the
source volume and volume X as the target volume, if this map does not exist.
3. Start the FlashCopy map (volume Y → volume X) with the -restore option to copy the
backup data onto the production disk. If the -restore option is specified and no
FlashCopy mapping exists, the command is ignored, which preserves your data integrity.

The production disk is instantly available with the backup data. Figure 8-1 shows an example
of Reverse FlashCopy.

Figure 8-1 Reverse FlashCopy
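
As a CLI sketch of these steps (with hypothetical names vol_X for the production volume, vol_Y for
the backup target, and fcmap_restore for the reverse mapping), the restore might look as follows:

   svctask mkfcmap -source vol_Y -target vol_X -copyrate 50 -name fcmap_restore
   svctask startfcmap -prep -restore fcmap_restore

The -restore parameter indicates that the mapping is being started to restore a volume that is
itself the target or source of other FlashCopy mappings, so the restore proceeds without deleting
the original mapping.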

Regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the
Reverse FlashCopy operation copies the modified data only.

Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy map with the same target volume.

8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager


The management of many large FlashCopy relationships and Consistency Groups is a
complex task without a form of automation for assistance.

IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores
using advanced point-in-time image technologies in the SVC. In addition, it provides an
optional integration with IBM Tivoli Storage Manager for the long-term storage of snapshots.

Figure 8-2 shows the integration of Tivoli Storage Manager and FlashCopy Manager from a
conceptual level.

Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before you
issue FlashCopy start commands to ensure that a consistent backup of the application is
made. You can put databases into hot backup mode and flush the file system cache before
starting the FlashCopy.

FlashCopy Manager also allows for easier management of on-disk backups that use
FlashCopy, and provides a simple interface to perform the “reverse” operation.

Figure 8-3 on page 412 shows the FlashCopy Manager feature.



Figure 8-3 Tivoli Storage Manager FlashCopy Manager features

Released December 2013, IBM Tivoli FlashCopy Manager V4.1 adds support for VMware 5.5
and vSphere environments with Site Recovery Manager (SRM), with instant restore for
VMware Virtual Machine File System (VMFS) data stores. This release also integrates with
IBM Tivoli Storage Manager for Virtual Environments, and it allows backup of point-in-time
images into the Tivoli Storage Manager infrastructure for long-term storage.

In addition to VMware vSphere, FlashCopy Manager provides support and application
awareness for the following applications:
򐂰 Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability
Groups
򐂰 IBM DB2 and Oracle databases, for use either with or without SAP environments
򐂰 IBM General Parallel File System (GPFS) software snapshots for DB2 pureScale®
򐂰 Other applications supported through script customizing

For more information about IBM Tivoli FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivostorflasmana/

FlashCopy Manager integration with Remote Copy Services


One of the interesting features of FlashCopy Manager is the capability of creating flash copy
backups from remote copy target volumes. Therefore, the backup does not have to be copied
from the primary site to the secondary site because it is already copied through Metro Mirror
(MM) or Global Mirror (GM). An application that is running in the primary site can be backed
up in the secondary site, and the source of this backup can be the target remote copy
volumes. This concept is presented in Figure 8-4 on page 413.

Figure 8-4 FlashCopy Manager integration with remote copy services

8.3 FlashCopy functional overview


FlashCopy works by defining a FlashCopy mapping that consists of one source volume with
one target volume. Multiple FlashCopy mappings (source-to-target relationships) can be
defined, and point-in-time consistency can be maintained across multiple individual mappings
by using Consistency Groups. For more information, see “Consistency Group with Multiple
Target FlashCopy” on page 417.

Before you start a FlashCopy (regardless of the type and specified options), you must issue a
prestartfcmap or prestartfcconsistgrp, which puts the SVC cache into write-through mode
and flushes the I/O that is currently bound for your volume. After FlashCopy is started, an
effective copy of a source volume to a target volume is created. The content of the source
volume is presented immediately on the target volume, and the original content of the target
volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0).

Note: Instead of using prestartfcmap or prestartfcconsistgrp, you can also use the
-prep parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
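
As a brief CLI sketch with a hypothetical mapping name fcmap0, the two-step and one-step forms
look as follows:

   svctask prestartfcmap fcmap0
   svctask startfcmap fcmap0

or, equivalently:

   svctask startfcmap -prep fcmap0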

The source and target volumes are available for use immediately after the FlashCopy
operation. The FlashCopy operation creates a bitmap that is referenced and maintained to
direct I/O requests within the source and target relationship. This bitmap is updated to reflect
the active block locations while data is copied in the background from the source to the target
and updates are made to the source.



For more information about background copy, see 8.4.5, “Grains and the FlashCopy bitmap”
on page 419. Figure 8-5 shows the redirection of the host I/O toward the source volume and
the target volume.

Figure 8-5 Redirection of host I/O

8.4 Implementing the SAN Volume Controller FlashCopy


In this section, we describe how FlashCopy is implemented in the SVC.

8.4.1 FlashCopy mappings


FlashCopy occurs between a source volume and a target volume. The source and target
volumes must be the same size. The minimum granularity that the SVC supports for
FlashCopy is an entire volume; it is not possible to use FlashCopy to copy only part of a
volume.

Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.

The source and target volumes must belong to the same SVC system, but they do not have to
be in the same I/O Group or storage pool. FlashCopy associates a source volume to a target
volume through FlashCopy mapping.

To become members of a FlashCopy mapping, the source and target volumes must be the
same size. Volumes that are members of a FlashCopy mapping cannot have their size
increased or decreased while they are members of the FlashCopy mapping.

A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.

Figure 8-6 shows the concept of FlashCopy mapping.

Figure 8-6 FlashCopy mapping

8.4.2 Multiple Target FlashCopy


The SVC supports up to 256 target volumes from a single source volume. Each copy is
managed by a unique mapping. Figure 8-7 shows the Multiple Target FlashCopy
implementation.

Important: Older copies must be complete for independent FlashCopy mappings.

Figure 8-7 Multiple Target FlashCopy implementation

Figure 8-7 also shows four targets and mappings that are taken from a single source, with
their interdependencies. In this example, Target 1 is the oldest (as measured from the time
that it was started) through to Target 4, which is the newest. The ordering is important
because of how the data is copied when multiple target volumes are defined and because of
the dependency chain that results.

A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 8-7). The older targets refer to
newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target volumes and the true source volume as a type of composite
source.



It treats all older volumes as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target volume shows 100% progress, its target volume contains
a complete set of data. In this case, mappings treat the set of newer target volumes (up to and
including the 100% progress target) as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the source until all data is copied to this target
and all older targets.

For more information about Multiple Target FlashCopy, see 8.4.6, “Interaction and
dependency between multiple target FlashCopy mappings” on page 420.

8.4.3 Consistency Groups


Consistency Groups address the requirement to preserve point-in-time data consistency
across multiple volumes for applications that include related data that spans multiple
volumes. For these volumes, Consistency Groups maintain the integrity of the FlashCopy by
ensuring that “dependent writes” are run in the application’s intended sequence.

When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.

Figure 8-8 shows a Consistency Group that includes two FlashCopy mappings.

Figure 8-8 FlashCopy Consistency Group

Important: After an individual FlashCopy mapping is added to a Consistency Group, it can


be managed as part of the group only. Operations, such as prepare, start, and stop, are no
longer allowed on the individual mapping.

Dependent writes
To show why it is crucial to use Consistency Groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.

In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this method, the SVC supports the concept of Consistency
Groups.

A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings. The maximum
number of FlashCopy mappings that is supported by the SVC system V7.4 is 4,096.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and,
therefore, simultaneously for all of the FlashCopy mappings that are defined in the
Consistency Group.

For example, when a FlashCopy start command is issued to the Consistency Group, all of
the FlashCopy mappings in the Consistency Group are started at the same time. This
simultaneous start results in a point-in-time copy that is consistent across all of the FlashCopy
mappings that are contained in the Consistency Group.
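
The following CLI sketch creates a Consistency Group for a database that spans a data volume and a
log volume and then starts both mappings atomically. The volume, mapping, and group names are
hypothetical:

   svctask mkfcconsistgrp -name db_cg
   svctask mkfcmap -source db_data -target db_data_fc -consistgrp db_cg -copyrate 0
   svctask mkfcmap -source db_log -target db_log_fc -consistgrp db_cg -copyrate 0
   svctask startfcconsistgrp -prep db_cg

Because the start command is issued to the group rather than to the individual mappings, both
targets represent the same point in time, which preserves the ordering of the dependent writes
that are described earlier in this section.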

Consistency Group with Multiple Target FlashCopy


A Consistency Group aggregates FlashCopy mappings, not volumes. Therefore, where a
source volume has multiple FlashCopy mappings, they can be in the same or separate
Consistency Groups.

If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
Consistency Group or in separate Consistency Groups, the resulting FlashCopy produces
multiple identical copies of the source data.

Maximum configurations
Table 8-1 on page 418 lists the FlashCopy properties and maximum configurations.



Table 8-1 FlashCopy properties and maximum configurations
FlashCopy property                         Maximum     Comment

FlashCopy targets per source               256         This maximum is the number of FlashCopy mappings that can
                                                       exist with the same source volume.

FlashCopy mappings per system              4,096       The number of mappings is no longer limited by the number
                                                       of volumes in the system, so the FlashCopy component limit
                                                       applies.

FlashCopy Consistency Groups per system    255         This maximum is an arbitrary limit that is policed by the
                                                       software.

FlashCopy volume capacity per I/O Group    1,024 TB    This maximum is a limit on the quantity of FlashCopy
                                                       mappings that are using bitmap space from this I/O Group.
                                                       This maximum configuration uses all 512 MB of bitmap space
                                                       for the I/O Group and allows no MM and GM bitmap space.
                                                       The default is 40 TB.

FlashCopy mappings per Consistency Group   512         This limit is because of the time that is taken to prepare
                                                       a Consistency Group with many mappings.

8.4.4 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to the source and target volumes when a
FlashCopy mapping is started, which is done by using a FlashCopy bitmap. The purpose of
the FlashCopy indirection layer is to enable the source and target volumes for read and write
I/O immediately after the FlashCopy is started.

To show how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and then started.

When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (which
creates the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on the source volumes and target volumes.

FlashCopy provides the semantics of a point-in-time copy by using the indirection layer, which
intercepts I/O that is directed at the source or target volumes. The act of starting a FlashCopy
mapping causes this indirection layer to become active in the I/O path, which occurs
automatically across all FlashCopy mappings in the Consistency Group. The indirection layer
then determines how each I/O is to be routed, which is based on the following factors:
򐂰 The volume and the logical block address (LBA) to which the I/O is addressed
򐂰 Its direction (read or write)
򐂰 The state of an internal data structure, the FlashCopy bitmap

The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O
from the target volume to the source volume, or queues the I/O while it arranges for data to be
copied from the source volume to the target volume. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.

8.4.5 Grains and the FlashCopy bitmap


When data is copied between volumes, it is copied in units of address space that are known
as grains. Grains are units of data that are grouped to optimize the use of the bitmap that
tracks changes to the data between the source and target volume. You can use 64 KB or
256 KB grain sizes (256 KB is the default). The FlashCopy bitmap contains 1 bit for each
grain and is used to track whether the source grain was copied to the target. The 64 KB grain
size uses bitmap space at a rate of four times the default 256 KB size.
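
The grain size is specified when the FlashCopy mapping is created. As a sketch with hypothetical
volume and mapping names, a mapping with the smaller 64 KB grain size can be created as follows:

   svctask mkfcmap -source vol_a -target vol_a_fc -grainsize 64 -copyrate 50 -name fcmap_fine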

The FlashCopy bitmap dictates read and write behavior for the source and target volumes.

Source reads
Reads are performed from the source volume, which is the same for non-FlashCopy volumes.

Source writes
Writes to the source cause the following results:
򐂰 If the grain was not copied to the target yet, the grain is copied before the actual write is
performed to the source. The bitmap is updated, which indicates that this grain is already
copied to the target.
򐂰 If the grain was already copied, the write is performed to the source as usual.

Target reads
Reads are performed from the target if the grain was copied. Otherwise, the read is
performed from the source and no copy is performed.

Target writes
Writes to the target cause the following results:
򐂰 If the grain was not copied from the source to the target, the grain is copied from the
source to the target before the actual write is performed to the target. The bitmap is
updated, which indicates that this grain is already copied to the target.
򐂰 If the entire grain is being updated on the target, the target is marked as split with the
source (if no I/O error occurs during the write) and the write goes directly to the target.
򐂰 If the grain in question was already copied from the source to the target, the write goes
directly to the target.

The FlashCopy indirection layer algorithm


Imagine the FlashCopy indirection layer as the I/O traffic director when a FlashCopy mapping
is active. The I/O is intercepted and handled according to whether it is directed at the source
volume or at the target volume, depending on the nature of the I/O (read or write) and the
state of the grain (whether it was copied).

Figure 8-9 on page 420 shows how the background copy runs while I/Os are handled
according to the indirection layer algorithm.



Figure 8-9 I/O processing with FlashCopy

8.4.6 Interaction and dependency between multiple target FlashCopy


mappings
Figure 8-10 shows a set of four FlashCopy mappings that share a common source. The
FlashCopy mappings target volumes Target 0, Target 1, Target 2, and Target 3.

Figure 8-10 Interactions among multiple target FlashCopy mappings

Target 0 is not dependent on a source because it completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).

Target 1 depends on Target 0. It remains dependent until all of Target 1 is copied. Target 2
depends on it because Target 2 is 20% copy complete. After all of Target 1 is copied, it can
then move to the idle_copied state.

Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2
is copied. No target depends on Target 2; therefore, when all of the data is copied to Target 2,
it can move to the idle_copied state.

Target 3 completed copying, so it is not dependent on any other maps.
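To see these dependencies on a running system, the CLI can list the mappings that depend on
a volume or on another mapping. A brief sketch with hypothetical names:

svcinfo lsvdiskdependentmaps vol_source
svcinfo lsfcmapdependentmaps fcmap_target1

If the output is empty, no other target depends on the data that is held on that volume.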

Target writes with multiple target FlashCopy


A write to an intermediate or the newest target volume must consider the state of the grain
within its own mapping, and the state of the grain of the next oldest mapping.

If the grain of the next oldest mapping is not yet copied, it must be copied before the write can
proceed to preserve the contents of the next oldest mapping. The data that is written to the
next oldest mapping comes from a target or source.

If the grain in the target that is being written is not yet copied, the grain is first copied from the
oldest of the newer mappings in which it was already copied, or from the source if it was not
copied to any of them. After this copy is done, the write can be applied to the target.

Target reads with multiple target FlashCopy


If the grain being read is copied from the source to the target, the read returns data from the
target that is being read. If the grain is not yet copied, each of the newer mappings is
examined in turn and the read is performed from the first copy that is found. If none are found,
the read is performed from the source.

Stopping the copy process


When a stop command is issued to a mapping that contains a target that has dependent
mappings, the mapping enters the stopping state and begins copying all grains that are
uniquely held on the target volume of the mapping that is being stopped to the next oldest
mapping that is in the copying state. The mapping remains in the stopping state until all grains
are copied and then enters the stopped state.

Note: The stopping copy process can be ongoing for several mappings that share the
source at the same time. At the completion of this process, the mapping automatically
makes an asynchronous state transition to the stopped state or the idle_copied state if the
mapping was in the copying state with progress = 100%.

For example, if the mapping that is associated with Target 0 was issued a stopfcmap
command or a stopfcconsistgrp command, Target 0 enters the stopping state while a
process copies the data of Target 0 to Target 1. After all of the data is copied, Target 0 enters
the stopped state and Target 1 is no longer dependent upon Target 0; however, Target 1
remains dependent on Target 2.
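A minimal CLI sketch of this example, with hypothetical mapping names; the state can be
checked while the cleaning process runs:

svctask stopfcmap fcmap_target0
svcinfo lsfcmap fcmap_target0

The mapping shows the stopping state until the grains that are uniquely held on Target 0 are
copied to the next oldest mapping.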

8.4.7 Summary of the FlashCopy indirection layer algorithm


Table 8-2 on page 422 summarizes the indirection layer algorithm.



Table 8-2 Summary table of the FlashCopy indirection layer algorithm

򐂰 Source volume, grain not copied:
  Read: Read from the source volume.
  Write: Copy the grain to the most recently started target for this source, then write to the
  source.
򐂰 Source volume, grain copied:
  Read: Read from the source volume.
  Write: Write to the source volume.
򐂰 Target volume, grain not copied:
  Read: If any newer targets exist for this source in which this grain was copied, read from
  the oldest of these targets. Otherwise, read from the source.
  Write: Hold the write. Check the dependency target volumes to see whether the grain was
  copied. If the grain is not copied to the next oldest target for this source, copy the grain to
  the next oldest target. Then, write to the target.
򐂰 Target volume, grain copied:
  Read: Read from the target volume.
  Write: Write to the target volume.

8.4.8 Interaction with the cache


Starting with Storwize firmware V7.3, the entire cache subsystem was redesigned and
changed. Cache is divided into upper cache and lower cache. Upper cache serves mostly as
write cache and hides the write latency from the hosts and application. Lower cache is a
read/write cache and optimizes I/O to and from disks. Figure 8-11 shows the new Storwize
cache architecture.

Figure 8-11 New cache architecture

This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.

The logical placement of the FlashCopy indirection layer is shown in Figure 8-12.

Figure 8-12 Logical placement of the FlashCopy indirection layer

Introduction of the two-level cache provides additional performance improvements to the


FlashCopy mechanism. Because the FlashCopy layer is above the lower cache in the
Storwize software stack now, it can benefit from read prefetching and coalescing writes to
back-end storage. Also, preparing FlashCopy is much faster because upper-cache write data
does not need to go directly to back-end storage but to the lower-cache layer. Additionally, in
the multitarget FlashCopy, the target volumes of the same image share the same cache data.
This design differs from previous code versions, where each volume had its own copy of
cached data.

8.4.9 FlashCopy and image mode volumes


FlashCopy can be used with image mode volumes. Because the source and target volumes
must be the same size, you must create a target volume with the same size as the image
mode volume when you are creating a FlashCopy mapping. To accomplish this task, use the
svcinfo lsvdisk -bytes volumeName command. The size in bytes is then used to
create the volume that is used in the FlashCopy mapping. This method provides an exact
number of bytes because image mode volumes might not line up one-to-one on other
measurement unit boundaries. In Example 8-1 on page 424, we list the size of the
ds_3400_img_vol volume. The ds_3400_img_vol_copy volume is then created, which
specifies the same size.



Example 8-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_2145:ITSO_Storwize2:superuser>lsvdisk -bytes ds_3400_img_vol
id 22
name ds_3400_img_vol
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 5
mdisk_grp_name MigrationPool_1024
capacity 7516192768
type image
formatted no
mdisk_id 15
mdisk_name mdisk15
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801FD80284000000000000038
throttling 0
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid
fc_map_count 0
sync_rate 80
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 5
parent_mdisk_grp_name MigrationPool_1024

IBM_2145:ITSO_Storwize2:superuser>mkvdisk -iogrp 0 -mdiskgrp test_pool_01 -size


7516192768 -unit b -name ds_3400_img_vol_copy
Virtual Disk, id [23], successfully created

IBM_2145:ITSO_Storwize2:superuser>lsvdisk -delim " "


22 ds_3400_img_vol 0 io_grp0 online 5 MigrationPool_1024 7.00GB image
6005076801FD80284000000000000038 0 1 not_empty 0 no 0 5 MigrationPool_1024
23 ds_3400_img_vol_copy 0 io_grp0 online 1 test_pool_01 7.00GB striped
6005076801FD80284000000000000039 0 1 not_empty 0 no 0 1 test_pool_01

Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to
modify the size of the volume.

These actions must be performed before a mapping is created.

You can use an image mode volume as a FlashCopy source volume or target volume.

8.4.10 FlashCopy mapping events
In this section, we describe the events that modify the states of a FlashCopy. We also
describe the mapping events that are listed in Table 8-3.

Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.

Table 8-3 Mapping events

Create: A FlashCopy mapping is created between the specified source volume and the
specified target volume. The operation fails if any one of the following conditions is true:
򐂰 The source volume is a member of 256 FlashCopy mappings.
򐂰 The node has insufficient bitmap memory.
򐂰 The source and target volumes are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed to a Consistency
Group for FlashCopy mappings that are members of a normal Consistency Group or to the
mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or
prestartfcconsistgrp command places the FlashCopy mapping into the preparing state.

The prestartfcmap or prestartfcconsistgrp command can corrupt any data that was on the
target volume because cached writes are discarded. Even if the FlashCopy mapping is never
started, the data from the target might be changed logically during the act of preparing to
start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the preparing state to the
prepared state after all cached data for the source is flushed and all cached data for the
target is no longer valid.

Start: When all of the FlashCopy mappings in a Consistency Group are in the prepared state,
the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the
start of all of the FlashCopy mappings in the Consistency Group must be synchronized
correctly concerning I/Os that are directed at the volumes by using the startfcmap or
startfcconsistgrp command.

The following actions occur during the running of the startfcmap command or the
startfcconsistgrp command:
򐂰 New reads and writes to all source volumes in the Consistency Group are paused in the
cache layer until all ongoing reads and writes beneath the cache layer are completed.
򐂰 After all FlashCopy mappings in the Consistency Group are paused, the internal cluster
state is set to allow FlashCopy operations.
򐂰 After the cluster state is set for all FlashCopy mappings in the Consistency Group, read
and write operations continue on the source volumes.
򐂰 The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled
for the source and target volumes.

Modify: The following FlashCopy mapping properties can be modified:
򐂰 FlashCopy mapping name
򐂰 Clean rate
򐂰 Consistency Group
򐂰 Copy rate (for background copy or stopping copy priority)
򐂰 Automatic deletion of the mapping when the background copy is complete

Stop: The following separate mechanisms can be used to stop a FlashCopy mapping:
򐂰 Issue a command.
򐂰 An I/O error occurred.

Delete: This command requests that the specified FlashCopy mapping is deleted. If the
FlashCopy mapping is in the copying state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping
enters the stopped state.

Copy complete: After all of the source data is copied to the target and no dependent
mappings exist, the state is set to copied. If the option to automatically delete the mapping
after the background copy completes is specified, the FlashCopy mapping is deleted
automatically. If this option is not specified, the FlashCopy mapping is not deleted
automatically and it can be reactivated by preparing and starting again.

Bitmap online/offline: The node failed.
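The Modify and Delete events correspond to the chfcmap and rmfcmap CLI commands. A
hedged sketch with hypothetical names that renames a mapping, changes its clean rate, and
then deletes it:

svctask chfcmap -name fcmap_db_backup -cleanrate 30 fcmap_0
svctask rmfcmap fcmap_db_backup

If the mapping is still in the copying state, rmfcmap requires the -force flag, as noted in the
Delete event description.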

8.4.11 FlashCopy mapping states
In this section, we describe the states of a FlashCopy mapping.

Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for the source and the target volumes.

If the mapping is incremental and the background copy is complete, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O Group that the mapping is assigned to is lost, the source and target volumes are
offline.

Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.

Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the SCSI
front end as a hardware error. If the mapping is incremental and a previous mapping is
completed, the mapping records the differences between the source and target volumes only.
If the connection to both nodes in the I/O Group that the mapping is assigned to is lost, the
source and target volumes go offline.

Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping is completed, the mapping records the differences between the source and target
volumes only. If the connection to both nodes in the I/O Group that the mapping is assigned to
is lost, the source and target volumes go offline.

Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed while they are waiting for the cache flush to complete. To
overcome this problem, SVC FlashCopy supports the prestartfcmap or
prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing
I/Os to continue to the source volume.

In the preparing state, the FlashCopy mapping is prepared by completing the following steps:
1. Flushing any modified write data that is associated with the source volume from the cache.
Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode so that subsequent
writes wait until data is written to disk before the write command that is received from the
host is complete.
3. Discarding any read or write data that is associated with the target volume from the cache.



Stopped
The mapping is stopped because you issued a stop command or an I/O error occurred. The
target volume is offline and its data is lost. To access the target volume, you must restart or
delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O Group that the mapping is assigned
to is lost, the source and target volumes go offline.

Stopping
The mapping is copying data to another mapping.

If the background copy process is complete, the target volume is online while the stopping
copy process completes.

If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.

The source volume is accessible for I/O operations.

Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that was not flushed and was
written to the source or target volume before the suspension is in cache until the mapping
leaves the suspended state.

Summary of FlashCopy mapping states


Table 8-4 lists the various FlashCopy mapping states and the corresponding states of the
source and target volumes.

Table 8-4 FlashCopy mapping state summary

򐂰 Idling/Copied: Source online, write-back cache; target online, write-back cache.
򐂰 Copying: Source online, write-back cache; target online, write-back cache.
򐂰 Stopped: Source online, write-back cache; target offline, cache state not applicable.
򐂰 Stopping: Source online, write-back cache; target online if the copy is complete or offline
  if the copy is incomplete, cache state not applicable.
򐂰 Suspended: Source offline, write-back cache; target offline, cache state not applicable.
򐂰 Preparing: Source online, write-through cache; target online but not accessible, cache
  state not applicable.
򐂰 Prepared: Source online, write-through cache; target online but not accessible, cache
  state not applicable.

8.4.12 Thin-provisioned FlashCopy
FlashCopy source and target volumes can be thin-provisioned.

Source or target thin-provisioned


The most common configuration is a fully allocated source and a thin-provisioned target. By
using this configuration, the target uses a smaller amount of real storage than the source.
With this configuration, use the NOCOPY (background copy rate = 0%) option only. Although
the COPY option is supported, this option creates a fully allocated target, which defeats the
purpose of thin provisioning.

Source and target thin-provisioned


When the source and target volumes are thin-provisioned, only the data that is allocated to
the source is copied to the target. In this configuration, the background copy option has no
effect.

Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.

Thin-provisioned incremental FlashCopy


The implementation of thin-provisioned volumes does not preclude the use of incremental
FlashCopy on the same volumes. It does not make sense to have a fully allocated source
volume and then use incremental FlashCopy (which is always a full copy the first time) to copy
this fully allocated source volume to a thin-provisioned target volume. However, this action is
not prohibited.

Consider the following optional configuration:


򐂰 A thin-provisioned source volume can be copied incrementally by using FlashCopy to a
thin-provisioned target volume. Whenever the FlashCopy is performed, only data that was
modified is recopied to the target. If space is allocated on the target because of I/O to the
target volume, this space is not reclaimed with subsequent FlashCopy operations.
򐂰 A fully allocated source volume can be copied incrementally by using FlashCopy to
another fully allocated volume at the same time while it is being copied to multiple
thin-provisioned targets (taken at separate points in time). By using this combination, a
single full backup can be kept for recovery purposes and the backup workload is
separated from the production workload. At the same time, older thin-provisioned backups
can be retained.

8.4.13 Background copy


With FlashCopy background copy enabled, the source volume data is copied to the
corresponding target volume. With the FlashCopy background copy disabled, only data that
changed on the source volume is copied to the target volume.

The benefit of the use of a FlashCopy mapping with background copy enabled is that the
target volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is not
performed, the target volume remains a valid copy of the source data only while the
FlashCopy mapping remains in place.



The background copy rate is a property of a FlashCopy mapping that is defined as a value
0 - 100. The background copy rate can be defined and changed dynamically for individual
FlashCopy mappings. A value of 0 disables the background copy.

Table 8-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.

Table 8-5 Background copy rate

Value      Data copied per second   Grains per second (256 KB grain)   Grains per second (64 KB grain)
1 - 10     128 KB                   0.5                                2
11 - 20    256 KB                   1                                  4
21 - 30    512 KB                   2                                  8
31 - 40    1 MB                     4                                  16
41 - 50    2 MB                     8                                  32
51 - 60    4 MB                     16                                 64
61 - 70    8 MB                     32                                 128
71 - 80    16 MB                    64                                 256
81 - 90    32 MB                    128                                512
91 - 100   64 MB                    256                                1024

The grains per second numbers represent the maximum number of grains that the SVC
copies per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.

If the SVC cannot achieve these copy rates because of insufficient bandwidth from the SVC
nodes to the MDisks, the background copy I/O contends for resources on an equal basis with
the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving from the
hosts tend to see an increase in latency and a consequential reduction in throughput.
Background copy and foreground I/O continue to make progress, and do not stop, hang, or
cause the node to fail. The background copy is performed by both nodes of the I/O Group in
which the source volume is found.
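Because the rate is a dynamic property, a mapping that was started with no background copy
can later be turned into a full, independent copy. A short sketch with a hypothetical mapping
name:

svctask chfcmap -copyrate 80 fcmap_0

According to Table 8-5, a rate of 80 attempts approximately 16 MB per second, provided that
the back-end MDisks can sustain it.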

8.4.14 Synthesis
The FlashCopy functionality in the SVC creates copies of the volumes. All of the data in the
source volume is copied to the destination volume, including operating system, logical volume
manager, and application metadata.

Synthesis: Certain operating systems cannot use FlashCopy without another step, which
is called synthesis. Synthesis performs a type of transformation on the operating system
metadata that is on the target volume so that the operating system can use the disk.

8.4.15 Serialization of I/O by FlashCopy


In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O
path. Therefore, many concurrent I/Os are allowed to the source and target volumes.

However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared and the mappings are derived from a particular
source volume. The lock is used in the following modes under the following conditions:
򐂰 The lock is held in shared mode during a read from the target volume, which touches a
grain that was not copied from the source.
򐂰 The lock is held in exclusive mode while a grain is being copied from the source to the
target.

If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.

If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.

Similarly, if the lock is held in exclusive mode, a process that is wanting to use the lock in
shared or exclusive mode must wait for it to be freed.

8.4.16 Event handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not
affect the handling or reporting of events for error conditions that are encountered in the I/O
path. Event handling and reporting are affected only by FlashCopy when a FlashCopy
mapping is copying or stopping, that is, actively moving data.

We describe these scenarios next.

Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O Group becomes inaccessible. FlashCopy continues with a
single copy of the FlashCopy bitmap that is stored as non-volatile in the remaining node in the
source I/O Group. The system metadata is updated to indicate that the missing node no
longer holds a current bitmap. When the failing node recovers or a replacement node is
added to the I/O Group, the bitmap redundancy is restored.

Path failure (path offline state)


In a fully functioning system, all of the nodes have a software representation of every volume
in the system within their application hierarchy.

Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, a subset of the nodes can be temporarily
isolated from several of the MDisks. When this situation happens, the managed disks are said
to be path offline on certain nodes.

Other nodes: Other nodes might see the managed disks as online because their
connection to the managed disks is still functioning.

When an MDisk enters the path offline state on an SVC node, all of the volumes that have
extents on the MDisk also become path offline. Again, this situation happens only on the
affected nodes. When a volume is path offline on a particular SVC node, the host access to
that volume through the node fails with the SCSI check condition indicating offline.



Path offline for the source volume
If a FlashCopy mapping is in the copying state and the source volume goes path offline, this
path offline state is propagated to all target volumes up to, but not including, the target volume
for the newest mapping that is 100% copied but remains in the copying state. If no mappings
are 100% copied, all of the target volumes are taken offline. Path offline is a state that exists
on a per-node basis. Other nodes might not be affected. If the source volume comes online,
the target and source volumes are brought back online.

Path offline for the target volume


If a target volume goes path offline but the source volume is still online and if any dependent
mappings exist, those target volumes also go path offline. The source volume remains online.

8.4.17 Asynchronous notifications


FlashCopy raises informational event log entries for certain mapping and Consistency Group
state transitions.

These state transitions occur as a result of configuration events that complete


asynchronously. The informational events can be used to generate Simple Network
Management Protocol (SNMP) traps to notify the user. Other configuration events complete
synchronously, and no informational events are logged as a result of the following events:
򐂰 PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group enters the prepared state as a result of a user request to prepare. The
user can now start (or stop) the mapping or Consistency Group.
򐂰 COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group enters the idle_or_copied state when it was in the copying or stopping
state. This state transition indicates that the target disk now contains a complete copy and
no longer depends on the source.
򐂰 STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group enters the stopped state as a result of a user request to stop. It is
logged after the automatic copy process completes. This state transition includes
mappings where no copying needed to be performed. This state transition differs from the
event that is logged when a mapping or group enters the stopped state as a result of an
I/O error.

8.4.18 Interoperation with Metro Mirror and Global Mirror


FlashCopy can work with MM and GM to provide better protection of the data. For example,
we can perform an MM copy to duplicate data from Site_A to Site_B and then perform a daily
FlashCopy to back up the data to another location.

Table 8-6 on page 433 lists the supported combinations of FlashCopy and remote copy. In the
table, remote copy refers to MM and GM.

Table 8-6 FlashCopy and remote copy interaction

FlashCopy source:
򐂰 Remote copy primary site: Supported.
򐂰 Remote copy secondary site: Supported. Latency: When the FlashCopy relationship is in
  the preparing and prepared states, the cache at the remote copy secondary site operates
  in write-through mode. This process adds latency to the latent remote copy relationship.

FlashCopy target:
򐂰 Remote copy primary site: This combination is supported with the following restrictions:
  issuing a stop -force might cause the remote copy relationship to be fully resynchronized;
  the code level must be 6.2.x or higher; and the I/O Group must be the same.
򐂰 Remote copy secondary site: This combination is supported with the major restriction
  that the FlashCopy mapping cannot be copying, stopping, or suspended. Otherwise, the
  restrictions are the same as at the remote copy primary site.

8.4.19 FlashCopy presets


The SVC GUI interface provides three FlashCopy presets (snapshot, clone, and backup) to
simplify the more common FlashCopy operations.

Although these presets meet most FlashCopy requirements, they do not provide support for
all possible FlashCopy options. If more specialized options are required that are not
supported by the presets, the options must be performed by using CLI commands.

In this section, we describe the three preset options and their use cases.

Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, the copy is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.

Snapshot uses the following preset parameters:


򐂰 Background copy: None
򐂰 Incremental: No
򐂰 Delete after completion: No
򐂰 Cleaning rate: No
򐂰 Primary copy source pool: Target pool

Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.

By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced; therefore, many snapshot copies can be used
in the environment.



Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.
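As a hedged CLI approximation of the snapshot preset (with hypothetical names and sizes,
not necessarily the exact commands that the GUI issues), the target is created
thin-provisioned and the mapping uses a background copy rate of 0:

svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name vol_prod_snap
svctask mkfcmap -source vol_prod -target vol_prod_snap -copyrate 0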

Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.

Clone uses the following preset parameters:


򐂰 Background copy rate: 50
򐂰 Incremental: No
򐂰 Delete after completion: Yes
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, users do not expect to refresh the clone or reference the
original production data again. If the source is thin-provisioned, the target is thin-provisioned
for the auto-create target.
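Expressed as a hedged CLI sketch with hypothetical names, the clone preset is essentially a
mapping with a 50% copy rate that removes itself when the background copy completes:

svctask mkfcmap -source vol_prod -target vol_prod_clone -copyrate 50 -cleanrate 50 -autodelete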

Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.

Backup uses the following preset parameters:


򐂰 Background copy rate: 50
򐂰 Incremental: Yes
򐂰 Delete after completion: No
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, as in the loss of the underlying physical controller. The user plans to
periodically update the secondary copy and does not want the overhead of creating a copy
each time (and incremental FlashCopy times are faster than full copy, which helps to reduce
the window where the new backup is not yet fully effective). If the source is thin-provisioned,
the target is thin-provisioned on this option for the auto-create target.

Another use case, which is not supported by the name, is to create and maintain (periodically
refresh) an independent image that can be subjected to intensive I/O (for example, data
mining) without affecting the source volume’s performance.
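A hedged CLI sketch of the backup preset with hypothetical names; the -incremental flag is
what allows each refresh to copy only the grains that changed since the last copy:

svctask mkfcmap -source vol_prod -target vol_prod_bkp -copyrate 50 -cleanrate 50 -incremental

Each refresh is then another prepare and start of the same mapping.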

8.5 Volume mirroring and migration options
Volume mirroring is a simple RAID 1-type function that allows a volume to remain online even
when the storage pool that backs it becomes inaccessible. Volume mirroring is designed to
protect the volume from storage infrastructure failures by seamless mirroring between
storage pools.

Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and
volume mirroring cannot be manipulated like a FlashCopy or other types of copy volumes.
However, this feature provides migration functionality, which can be obtained by splitting the
mirrored copy from the source or by using the “migrate to” function. Volume mirroring cannot
control back-end storage mirroring or replication.

With volume mirroring, host I/O completes when both copies are written. Before version 6.3.0,
this feature took a copy offline when it had an I/O timeout, and then resynchronized with the
online copy after it recovered. With V6.3.0, this feature is enhanced with a tunable latency
tolerance. This tolerance provides an option to give preference to losing the redundancy
between the two copies. This tunable timeout value is Latency or Redundancy.

The Latency tuning option, which is set with svctask chvdisk -mirrorwritepriority
latency, is the default. This behavior was available in releases before V6.3.0. It prioritizes
host I/O latency, which yields a preference to host I/O over availability.

However, you might need to give preference to redundancy in your environment when
availability is more important than I/O response time. Use the svctask chvdisk
-mirrorwritepriority redundancy command to set the Redundancy option.

Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.

Migration offers the following options:


򐂰 Export to image mode: By using this option, you can move storage from managed mode to
image mode, which is useful if you are using the SVC as a migration device. For example,
vendor A’s product cannot communicate with vendor B’s product, but you must migrate
existing data from vendor A to vendor B. By using the Export to Image mode option, you
can migrate data by using SVC copy services functions and then return control to the
native array while maintaining access to the hosts.
򐂰 Import to image mode: By using this option, you can import an existing storage MDisk or
logical unit number (LUN) with its existing data from an external storage system without
putting metadata on it so that the existing data remains intact. After you import it, all copy
services functions can be used to migrate the storage to other locations while the data
remains accessible to your hosts.
򐂰 Volume migration by using volume mirroring and then by using split into new volume: By
using this option, you can use the available RAID 1 functionality. You create two copies of
data that initially has a set relationship (one volume with two copies - one primary and one
secondary) but then break the relationship (two volumes, both primary and no relationship
between them) to make them independent copies of data. You can use this option to
migrate data between storage pools and devices. You might use this option if you want to
move volumes to multiple storage pools. A volume can have two copies at a time, which
means that you can add only one copy to the original volume and then you have to split
those copies to create another copy of the volume.



򐂰 Volume migration by using move to another pool: By using this option, you can move any
volume between storage pools without any interruption to the host access. This option is a
quicker version of the “volume mirroring and split into new volume” option. You might use
this option if you want to move volumes in a single step or you do not have a volume mirror
copy already.

Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your SVC if they are not installed. For more
information, see the IBM SVC Host Attachment User’s Guide, SC26-7905. Ensure that you
consult the revision of the document that applies to your SVC.

With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using volume mirroring over volume migration is
beneficial because storage pools do not have to have the same extent size with volume
mirroring (a requirement with volume migration).
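As a hedged sketch with hypothetical volume and pool names, the mirroring-and-split and
move-to-another-pool options that are described previously translate roughly into the
following CLI sequences:

svctask addvdiskcopy -mdiskgrp Pool_New vol_data
svcinfo lsvdisksyncprogress vol_data
svctask splitvdiskcopy -copy 1 -name vol_data_new vol_data
svctask migratevdisk -vdisk vol_data -mdiskgrp Pool_New

The copy ID that is passed to splitvdiskcopy depends on which copy was added; check
svcinfo lsvdiskcopy first. The migratevdisk command implements the "move to another
pool" option in a single step.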

Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume so the result is one
volume presented to the host with two copies of data connected to this volume. Only
splitting copies creates another volume and then both volumes have only one copy of the
data.

Starting with firmware 7.3 and the introduction of the new cache architecture, mirrored
volume performance is significantly improved. Now, lower cache is beneath the volume
mirroring layer, which means that both copies have their own cache. This approach helps in
cases of copies of different types, for example, generic and compressed, because each copy
uses its independent cache and performs its own read prefetch. Destaging of the cache can
now be independent for each copy, so one copy does not affect the performance of a second
copy.

Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the
destaging process, depending on the MDisk type and utilization, for each copy independently.

8.6 Native IP replication


Before we describe Remote Copy features that benefit from the use of multiple SVC systems,
it is important to describe the new partnership option that is introduced with the 7.2 version of
the SVC code: native IP replication.

8.6.1 Native IP replication technology


Remote Mirroring over IP communication is now supported on the IBM SVC and Storwize
Family systems by using Ethernet communication links. The SVC IP replication uses
innovative Bridgeworks SANSlide technology to optimize network bandwidth and utilization.
This new function enables the use of a lower-speed and lower-cost networking infrastructure
for data replication. Bridgeworks’ SANSlide technology, which is integrated into the IBM SVC
and Storwize Family Software, uses artificial intelligence to help optimize network bandwidth
utilization and adapt to changing workload and network conditions. This technology can
improve remote mirroring network bandwidth utilization up to three times, which can enable
clients to deploy a less costly network infrastructure or speed up remote replication cycles to
enhance disaster recovery effectiveness.

Note: Consider the following rules for creating remote partnerships between the SVC and
Storwize Family systems:
򐂰 An SVC is always in the replication layer.
򐂰 By default, the SVC is in the storage layer but can be changed to the replication layer.
򐂰 A system can form partnerships only with systems in the same layer.
򐂰 An SVC can virtualize an SVC only if the SVC is in the storage layer.
򐂰 With version 6.4, an SVC in the replication layer can virtualize an SVC in the storage
layer.

In a typical Ethernet network data flow, the data transfer slows down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that are sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 8-13.

Figure 8-13 Typical Ethernet network data flow

By using the Bridgeworks SANSlide technology, this typical behavior can be eliminated with
enhanced parallelism of the data flow by using multiple virtual connections (VC) that share IP
links and addresses. The artificial intelligence engine can dynamically adjust the number of
VCs, receive window size, and packet size as appropriate to maintain optimum performance.
While the engine is waiting for one VC’s ACK, it sends more packets across other VCs. If
packets are lost from any VC, data is automatically retransmitted, as shown in Figure 8-14.

Figure 8-14 Optimized network data flow by using Bridgeworks SANSlide technology



For more information about Bridgeworks SANSlide technology, see IBM Storwize V7000 and
SANSlide Implementation, REDP-5023.

With native IP partnership, the following Copy Services features are supported:
򐂰 MM
Referred to as synchronous replication, MM provides a consistent copy of a source virtual
disk on a target virtual disk. Data is written to the target virtual disk synchronously after it
is written to the source virtual disk so that the copy is continuously updated.
򐂰 GM and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously
so that the copy is continuously updated. However, the copy might not contain the last few
updates if a disaster recovery operation is performed. An added extension to GM is GM
with Change Volumes. GM with Change Volumes is the preferred method for use with
native IP replication.

8.6.2 IP partnership limitations


The following prerequisites and assumptions must be considered before IP partnership
between two SVC systems can be established:
򐂰 The SVC systems are successfully installed with SVC 7.2.0 or later code levels.
򐂰 The SVC systems have the necessary licenses that allow remote copy partnerships to be
configured between two systems. No separate license is required to enable IP
partnership.
򐂰 The storage SANs are configured correctly and the correct infrastructure to support the
SVC systems in remote copy partnerships over IP links is in place.
򐂰 The two systems must be able to ping each other and perform the discovery.
򐂰 The maximum number of partnerships between the local and remote systems, including
both IP and FC partnerships, is limited to the current maximum that is supported, which is
three partnerships (four systems total).
򐂰 Only a single partnership over IP is supported.
򐂰 A system can have simultaneous partnerships over FC and IP but with separate systems.
The FC zones between two systems must be removed before an IP partnership is
configured.
򐂰 IP partnerships are supported on both 10 Gbps links and 1 Gbps links. However, the
intermix of both on a single link is not supported.
򐂰 The maximum supported round-trip time is 80 milliseconds (ms) for 1 Gbps links.
򐂰 The maximum supported round-trip time is 10 ms for 10 Gbps links.
򐂰 The minimum supported link bandwidth is 10 Mbps.
򐂰 The inter-cluster heartbeat traffic uses 1 Mbps per link.
򐂰 Only nodes from two I/O Groups can have ports that are configured for an IP partnership.
򐂰 Migrations of remote copy relationships directly from FC-based partnerships to IP
partnerships are not supported.
򐂰 IP partnerships between the two systems can be over IPv4 or IPv6 only but not both.
򐂰 Virtual LAN (VLAN) tagging of the IP addresses that are configured for remote copy is
supported starting with Storwize code version 7.4.

򐂰 Management IP and iSCSI IP on the same port can be in a different network starting with
Storwize code 7.4.
򐂰 An added layer of security is provided by using Challenge Handshake Authentication
Protocol (CHAP) authentication.
򐂰 TCP ports 3260 and 3265 are used for IP partnership communications; therefore, these
ports must be open in firewalls between the systems.
򐂰 The following maximum throughput is restricted based on the use of 1 Gbps or 10 Gbps
ports:
– One 1 Gbps port might transfer up to 110 Mbps
– Two 1 Gbps ports might transfer up to 220 Mbps
– One 10 Gbps port might transfer up to 190 Mbps
– Two 10 Gbps ports might transfer up to 280 Mbps

Note: The definition of the Bandwidth setting that is used when IP partnerships are created
changed. Previously, the bandwidth setting defaulted to 50 MBps and was the maximum
transfer rate from the primary site to the secondary site for initial sync/resyncs of volumes.

The Link Bandwidth setting is now configured in Mbps (megabits per second) rather than
MBps. You set the Link Bandwidth setting to a value that the communication link can sustain
or to what is allocated for replication. The Background Copy Rate setting is now a percentage
of the Link Bandwidth. The Background Copy Rate setting determines the available bandwidth
for the initial sync and resyncs or for GM with Change Volumes.
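As a hedged illustration of these settings, the following sketch creates an IP partnership with
hypothetical values (the matching command must also be run on the remote system, pointing
back at the local system):

svctask mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50
svcinfo lspartnership

With these example values, up to 50% of the 100 Mbps link (50 Mbps) is made available for
initial synchronization and resynchronization.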

8.6.3 VLAN support


Starting with version 7.4 of Storwize code, VLAN tagging is supported for both iSCSI host
attachment and IP replication. Hosts and remote-copy operations can connect to the system
through Ethernet ports. Each traffic type has different bandwidth requirements, which can
interfere with each other if they share the same IP connections. VLAN tagging creates two
separate connections on the same IP network for different types of traffic. The system
supports VLAN configuration on both IPv4 and IPv6 connections.

When the VLAN ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication on Storwize, the appropriate VLAN settings on the Ethernet network
and servers must be configured correctly in order not to experience connectivity issues. After
the VLANs are configured, changes to the VLAN settings will disrupt iSCSI and IP replication
traffic to and from Storwize.

When the VLAN ID is configured for each IP address, be aware that if the VLAN settings for
the local and failover ports on the two nodes of an I/O Group differ, the switches must also be
configured with the failover VLANs on the local switch ports. Otherwise, the failover of IP
addresses from a failing node to the surviving node cannot succeed, and paths to the system
and its storage are lost during a node failure.

8.6.4 IP partnership and SVC terminology


The IP partnership terminology and abbreviations are listed in Table 8-7 on page 440.



Table 8-7 Terminology

Remote copy group or remote copy port group: The following numbers group a set of IP
addresses that are connected to the same physical link. Therefore, only IP addresses that are
part of the same remote copy group can form remote copy connections with the partner
system:
򐂰 0: Ports that are not configured for remote copy
򐂰 1: Ports that belong to remote copy port group 1
򐂰 2: Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host-attach and remote copy functionality.
Therefore, the correct settings must be applied to each IP address.

IP partnership: Two SVC systems that are partnered to perform remote copy over native IP
links.

FC partnership: Two SVC systems that are partnered to perform remote copy over native FC
links.

Failover: Failure of a node within an I/O Group causes all virtual disks that are owned by this
node to fail over to the surviving node. When the configuration node of the system fails,
management IPs also fail over to an alternative node.

Failback: When the failed node rejoins the system, all IP addresses that failed over are failed
back from the surviving node to the rejoined node, and virtual disk access is restored through
this node.

linkbandwidthmbits: The aggregate bandwidth of all physical links between two sites in Mbps.

IP partnership or partnership over native IP links: These terms are used to describe the IP
partnership feature.

8.6.5 States of IP partnership


The different partnership states in the IP partnership are listed in Table 8-8.

Table 8-8 IP partnership states

Partially_Configured_Local: Systems connected: No. Active remote copy I/O supported: No.
This state indicates that the initial discovery is complete.

Fully_Configured: Systems connected: Yes. Active remote copy I/O supported: Yes. Discovery
successfully completed between two systems, and the two systems can establish remote copy
relationships.

Fully_Configured_Stopped: Systems connected: Yes. Active remote copy I/O supported: Yes.
The partnership is stopped on the system.

Fully_Configured_Remote_Stopped: Systems connected: Yes. Active remote copy I/O
supported: No. The partnership is stopped on the remote system.

Not_Present: Systems connected: Yes. Active remote copy I/O supported: No. The two
systems cannot communicate with each other. This state is also seen when data paths
between the two systems are not established.

Fully_Configured_Exceeded: Systems connected: Yes. Active remote copy I/O supported: No.
There are too many systems in the network, and the partnership from the local system to the
remote system is disabled.

Fully_Configured_Excluded: Systems connected: No. Active remote copy I/O supported: No.
The connection is excluded because of too many problems, or either system cannot support
the I/O workload for the MM and GM relationships.

The following steps must be completed to establish two systems in the IP partnerships:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory and users can choose to not configure the CHAP secret.
2. If required, the administrator configures the system IP addresses on both local and remote
systems so that they can discover each other over the network.
3. If you want to use VLANs, configure your LAN switches and Storwize Ethernet ports to use
VLAN tagging.
4. The administrator configures the SVC ports on each node in both of the systems by using
the svctask cfgportip command (a sketch follows this list) and completes the following steps:
a. Configure the IP addresses for remote copy data.
b. Add the IP addresses in the respective remote copy port group.
c. Define whether the host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system where the partnership state then transitions to the Partially_Configured_Local
state.
6. The administrator establishes the partnership from the remote system with the local
system, and if successful, the partnership state then transitions to the Fully_Configured
state, which implies that the partnerships over the IP network were successfully
established. The partnership state momentarily remains in the not_present state before
transitioning to the fully_configured state.
7. The administrator creates MM, GM, and GM with Change Volume relationships.
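As a hedged illustration of step 4 (hypothetical addresses; the trailing 1 is the Ethernet port
ID), the port configuration takes roughly the following form on each participating node:

svctask cfgportip -node node1 -ip 10.41.1.10 -mask 255.255.255.0 -gw 10.41.1.1 -remotecopy 1 1

The equivalent command is run on the corresponding node of the remote system with its own
addresses, keeping the same remote copy port group number for ports on the same inter-site
link.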

Partnership consideration: When the partnership is created, no master or auxiliary


status is defined or implied. The partnership is equal. The concepts of master or auxiliary
and primary or secondary apply to volume relationships only, not to system partnerships.

8.6.6 Remote copy groups


This section describes remote copy groups (or remote copy port groups) and different ways to
configure the links between the two remote systems. The two SVC systems can be
connected to each other over one link or, at most, two links. To address the requirement to
allow the SVC to know about the physical links between the two sites, the concept of remote
copy port groups was introduced.

The SVC IP addresses that are connected to the same physical link are designated with
identical remote copy port groups. The SVC supports three remote copy groups: 0, 1, and 2.
The SVC IP addresses are, by default, in remote copy port group 0. Ports in port group 0 are
not considered for creating remote copy data paths between two systems. For partnerships to



be established over IP links directly, IP ports must be configured in remote copy group 1 if a
single inter-site link exists, or in remote copy groups 1 and 2 if two inter-site links exist.

You can assign one IPv4 address and one IPv6 address to each Ethernet port on the SVC
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group. The administrator might want to use IPv6 addresses for
remote copy operations and use IPv4 addresses on that same port for iSCSI host attach. This
configuration also implies that for two systems to establish an IP partnership, both systems
must have IPv6 addresses that are configured.

Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.

Note: To establish an IP partnership, each SVC node must have only a single remote copy
port group that is configured, that is, 1 or 2. The remaining IP addresses must be in remote
copy port group 0.

8.6.7 Supported configurations


The following supported configurations for an IP partnership that were in the first release are
described in this section:
򐂰 Two 2-node systems are in an IP partnership over a single inter-site link, as shown in
Figure 8-15 (configuration 1).

Figure 8-15 Single link with only one remote copy port group that is configured in each system

As shown in Figure 8-15 on page 442, two systems exist: System A and System B. A
single remote copy port group 1 is created on Node A1 on System A and on Node B2 on
System B because only a single inter-site link exists to facilitate the IP partnership traffic.
(The administrator might choose to configure the remote copy port group on Node B1 on
System B instead of Node B2.) At any time, only the IP addresses that are configured in
remote copy port group 1 on the nodes in System A and System B participate in
establishing data paths between the two systems after the IP partnerships are created. In
this configuration, no failover ports are configured on the partner node in the same I/O
Group.
This configuration has the following characteristics:
– Only one node in each system has a configured remote copy port group, and no
failover ports are configured.
– If Node A1 in System A or Node B2 in System B fails, the IP partnership stops and
enters the not_present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state changes to the fully_configured state.
– If the inter-site system link fails, the IP partnerships transition to the not_present state.
– This configuration is not recommended because it is not resilient to node failures.
򐂰 Two 2-node systems are in an IP partnership over a single inter-site link (with configured
failover ports), as shown in Figure 8-16 (configuration 2).

Figure 8-16 Only one remote copy group on each system and nodes with failover ports configured

As shown in Figure 8-16, two systems exist: System A and System B. A single remote
copy port group 1 is configured on two Ethernet ports, one each, on Node A1 and Node
A2 on System A and similarly, on Node B1 and Node B2 on System B. Although two ports
on each system are configured for remote copy port group 1, only one Ethernet port in
each system actively participates in the IP partnership process. This selection is
determined by a path configuration algorithm that is designed to choose data paths
between the two systems to optimize performance.

The other port on the partner node in the I/O Group behaves as a standby port that is
used in a node failure. If Node A1 fails in System A, the IP partnership continues servicing
replication I/O from Ethernet Port 2 because a failover port is configured on Node A2 on
Ethernet Port 2. However, discovery and path configuration logic to re-establish paths
post-failover might take time, which can cause partnerships to transition to the not_present
state for that period. The details of the particular IP port that actively participates in the IP
partnership are provided in the svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
– Each node in the I/O Group has the same remote copy port group that is configured.
However, only one port in that remote copy port group is active at any time at each
system.
– If Node A1 in System A or Node B2 in System B fails in its system, the rediscovery of
the IP partnerships is triggered and continues servicing the I/O from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
򐂰 Two 4-node systems are in an IP partnership over a single inter-site link (with failover ports
that are configured), as shown in Figure 8-17 (configuration 3).

Figure 8-17 Multinode systems single inter-site link with only one remote copy port group

As shown in Figure 8-17 on page 444, there are two 4-node systems: System A and
System B. A single remote copy port group 1 is configured on nodes A1, A2, A3, and A4
on System A at Site A, and on nodes B1, B2, B3, and B4 on System B at Site B. Although
four ports are configured for remote copy group 1, only one Ethernet port in each remote
copy port group on each system actively participates in the IP partnership process. Port
selection is determined by a path configuration algorithm. The other ports play the role of
standby ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with remote copy port group 1 from any of the nodes from either of the two I/O
Groups in System A. However, it might take time (generally tens of seconds) for discovery
and path configuration logic to re-establish the paths after the failover and this process can
cause partnerships to transition to the not_present state. This result leads remote copy
relationships to stop and the administrator might need to manually verify the issues in the
event log and start the relationships or remote copy Consistency Groups, if they do not
autorecover. The details about the particular IP port that is actively participating in the IP
partnership process are provided in the svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both I/O Groups.
However, only one port in that remote copy port group remains active and participates
in the IP partnership on each system.
– If Node A1 in System A or Node B2 in System B encountered a failure in the system,
the discovery of the IP partnerships is triggered and it continues servicing the I/O from
the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
򐂰 An eight-node system is in an IP partnership with a four-node system over a single
inter-site link, as shown in Figure 8-18 on page 446 (configuration 4).

Figure 8-18 Multinode systems with single inter-site link with only one remote copy port group

As shown in Figure 8-18, there is an eight-node system (System A in Site A) and a
four-node system (System B in Site B). A single remote copy port group 1 is configured on
nodes A1, A2, A5, and A6 on System A at Site A. Similarly, a single remote copy port
group 1 is configured on nodes B1, B2, B3, and B4 on System B.
Although there are four I/O Groups (eight nodes) in System A, a maximum of two I/O Groups
can be configured for IP partnerships. If Node A1 fails in System A,
the IP partnership continues using one of the ports that is configured in the remote copy
port group from any of the nodes from either of the two I/O Groups in System A. However,
it might take time for discovery and path configuration logic to re-establish paths
post-failover and this delay might cause partnerships to transition to the not_present state.
This process can lead the remote copy relationships to stop and the administrator must
manually start them if the relationships do not auto-recover. The details of which particular
IP port is actively participating in the IP partnership process are provided in the
svcinfo lsportip output (reported as used).

This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both the I/O Groups
that are identified for participating in IP replication. However, only one port in that
remote copy port group remains active on each system and participates in IP
replication.
– If Node A1 in System A or Node B2 in System B fails in the system, the IP partnerships
trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
򐂰 Two 2-node systems exist with two inter-site links, as shown in Figure 8-19 (configuration
5).

Figure 8-19 Dual links with two remote copy groups on each system are configured

As shown in Figure 8-19, remote copy port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O Group. Instead, the ports
are maintained in different remote copy port groups on both of the nodes and they remain
active and participate in the IP partnership by using both of the links.
However, if either of the nodes in the I/O Group fails (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in remote
copy port group 2. Therefore, the effective bandwidth of the two links is reduced to 50%.
Only the bandwidth of a single link is available until the failure is resolved.
This configuration has the following characteristics:
– Two inter-site links exist and two remote copy port groups are configured.
– Each node has only one IP port in remote copy port group 1 or 2.
– Both the IP ports in the two remote copy port groups participate simultaneously in the
IP partnerships. Therefore, both of the links are used.

– During node failure or link failure, the IP partnership traffic continues from the other
available link and the port group. Therefore, if two links of 10 Mbps each are available
and you have 20 Mbps of effective link bandwidth, bandwidth is reduced to 10 Mbps
only during a failure.
– After the node failure or link failure is resolved and failback happens, the entire
bandwidth of both of the links is available as before.
򐂰 Two 4-node systems are in an IP partnership with dual inter-site links, as shown in
Figure 8-20 (configuration 6).

Figure 8-20 Multinode systems with dual inter-site links between the two systems

As shown in Figure 8-20, there are two 4-node systems: System A and System B. This
configuration is an extension of configuration 5 to a multinode multi-I/O Group
environment. As seen in this configuration, two I/O Groups exist and each node in the I/O
Group has a single port that is configured in remote copy port group 1 or 2. Although two
ports are configured in remote copy port groups 1 and 2 on each system, only one IP port
in each remote copy port group on each system actively participates in the IP partnership.
The other ports that are configured in the same remote copy port group act as standby
ports in a failure. Which port in a configured remote copy port group participates in the IP
partnership at any moment is determined by a path configuration algorithm.

In this configuration, if Node A1 fails in System A, the IP partnership traffic continues from
Node A2 (that is, remote copy port group 2) and at the same time the failover also causes
discovery in remote copy port group 1. Therefore, the IP partnership traffic continues from
Node A3 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process are provided in the
svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in the I/O Group 1 or 2.
However, only one port per system in both remote copy port groups remains active and
participates in the IP partnership.
– Only a single port per system from each configured remote copy port group
participates simultaneously in the IP partnership. Therefore, both of the links are used.
– During node failure or port failure of a node that is actively participating in the IP
partnership, the IP partnership continues from the alternative port because another
port is in the system in the same remote copy port group but in a different I/O Group.
– The pathing algorithm can start the discovery of an available port in the affected
remote copy port group in the second I/O Group and pathing is re-established, which
restores the total bandwidth, that is, both of the links are available to support the IP
partnership.
򐂰 An eight-node system is in an IP partnership with a four-node system over dual inter-site
links, as shown in Figure 8-21 on page 450 (configuration 7).

Figure 8-21 Multinode systems (two I/O Groups on each system) with dual inter-site links between
the two systems

As shown in Figure 8-21, an eight-node System A is in Site A and a four-node System B is
in Site B. Because a maximum of two I/O Groups in the IP partnership are supported in a
system, although there are four I/O Groups (eight nodes), nodes from only two I/O Groups
are configured with remote copy port groups in System A. The remaining or all of the I/O
Groups can be configured to be remote copy partnerships over Fibre Channel. In this
configuration, two links exist and two I/O Groups are configured with remote copy port
groups 1 and 2, but path selection logic is managed by an internal algorithm. Therefore,
this configuration depends on the pathing algorithm to decide which of the nodes actively
participates in the IP partnership. Even if Node A5 and Node A6 are configured with
remote copy port groups correctly, active IP partnership traffic on both of the links might be
driven from Node A1 and Node A2 only.

If Node A1 fails in System A, IP partnership traffic continues from Node A2 (that is, remote
copy port group 2) and the failover also causes IP partnership traffic to continue from
Node A5 on which remote copy port group 1 is configured. The details of the particular IP
port actively participating in the IP partnership process are provided in the svcinfo
lsportip output (reported as used).
This configuration has the following characteristics:
– Two I/O Groups exist with nodes in those I/O Groups that are configured in two remote
copy port groups because two inter-site links are available for participating in the IP
partnership. However, only one port per system in a particular remote copy port group
remains active and participates in the IP partnership.
– One port per system from each remote copy port group participates in the IP
partnership simultaneously. Therefore, both of the links are used.
– If a node or a port on the node that is actively participating in the IP partnership fails,
the remote copy data path is established from that port because another port is
available on an alternative node in the system with the same remote copy port group.
– The path selection algorithm starts discovery of the available port in the affected
remote copy port group in the alternative I/O Groups and paths are re-established,
restoring the total bandwidth across both links.
– The remaining or all of the I/O Groups can be in remote copy partnerships with other
systems.
򐂰 An example of unsupported configuration for single inter-site link is shown in Figure 8-22
(configuration 8).

Figure 8-22 Two node systems with single inter-site link and remote copy port groups are
configured

As shown in Figure 8-22, this configuration is similar to configuration 2, but it differs
because each node now has the same remote copy port group that is configured on more
than one IP port.
On any node, only one port at any time can participate in the IP partnership. Configuring
multiple ports in the same remote copy group on the same node is not supported.

򐂰 An example of an unsupported configuration for dual inter-site link is shown in Figure 8-23
(configuration 9).

Figure 8-23 Dual links with two remote copy port groups with failover port groups are configured

As shown in Figure 8-23, this configuration is similar to configuration 5, but it differs
because each node now also has two ports that are configured with remote copy port
groups. In this configuration, the path selection algorithm can select a path that might
cause partnerships to transition to the not_present state and then recover.
This result is a configuration restriction. The use of this configuration is not recommended
unless the configuration restriction is lifted in future releases.
򐂰 An example deployment for configuration 2 with a dedicated inter-site link is shown in
Figure 8-24 (configuration 10).

Figure 8-24 Example deployment

In this configuration, one port on each node in System A and System B is configured in
remote copy group 1 to establish an IP partnership and to support remote copy
relationships. A dedicated inter-site link is used for IP partnership traffic and iSCSI host
attach is disabled on those ports.
The following configuration steps are used:
a. Configure system IP addresses correctly so that they can be reached over the inter-site
link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6 and then assign IP
addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on both systems by using the following settings:
• Remote copy group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set. (The default MTU is 1500 on the SVC.)
e. Establish the IP partnerships from both of the systems.
f. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
򐂰 An example deployment for configuration 5 with ports that are shared with host access is
shown in Figure 8-25 (configuration 11).

Figure 8-25 Example deployment

In this configuration, IP ports are shared by both iSCSI hosts and the IP partnership.
The following configuration steps are used:
a. Configure the system IP addresses correctly so that they can be reached over the
inter-site link.
b. Qualify whether the IP partnerships must be created over IPv4 or IPv6 and then assign
IP addresses and open firewall ports 3260 and 3265.

c. Configure the IP ports for remote copy on System A1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
d. Configure the IP ports for remote copy on System B1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
e. Check that the MTU levels across the network meet the requirements as set. (The
default MTU is 1500 on the SVC.)
f. Establish the IP partnerships from both systems.
g. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
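
As an illustration of step c, the following CLI sketch configures the shared ports on System A1. The node names, port IDs, and addresses are hypothetical, and the -host and -remotecopy parameters of cfgportip are assumed to be available at this code level; step d is performed in the same way on System B1 with its own addresses:

cfgportip -node node1 -ip 10.20.30.41 -mask 255.255.255.0 -gw 10.20.30.1 -host yes -remotecopy 1 1
cfgportip -node node2 -ip 10.20.30.44 -mask 255.255.255.0 -gw 10.20.30.1 -host yes -remotecopy 2 4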

8.6.8 Setting up the SVC system IP partnership by using the GUI


For more information about how to create the IP partnership between the SVC systems by
using the SVC GUI, see Chapter 10, “SAN Volume Controller operations using the GUI” on
page 655.
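
The IP partnership can also be established from the CLI. The following sketch assumes IPv4 addressing, a hypothetical remote system IP address, and example bandwidth values; the equivalent command must be run on both systems before the partnership reaches the fully_configured state:

mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50
lspartnership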

8.7 Remote Copy


In this section, we describe the Remote Copy services: synchronous remote copy, which is
called Metro Mirror (MM); asynchronous remote copy, which is called Global Mirror (GM); and
GM with Change Volumes.
Remote Copy in the SVC is similar to Remote Copy in the IBM System Storage DS8000
family at a functional level, but the implementation differs.

The SVC provides a single point of control when remote copy is enabled in your network
(regardless of the disk subsystems that are used) if those disk subsystems are supported by
the SVC.

The general application of SVC Remote Copy services is to maintain two real-time
synchronized copies of a disk. Often, two copies are geographically dispersed between two
SVC systems, although it is possible to use MM or GM within a single system (within an I/O
Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.

Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship where resource allocation is shared between the systems.
Licensing must also be doubled because the source and the target are within the same
system.

Use intercluster MM/GM when possible. For mirroring volumes in the same I/O Group, it is
better to use Volume Mirroring or the FlashCopy feature.

A typical application of this function is to set up a dual-site solution that uses two SVC
systems. The first site is considered the primary or production site, and the second site is
considered the backup site or failover site, which is activated when a failure at the first site is
detected.

8.7.1 Multiple SVC system mirroring


Each SVC system can maintain up to three partner system relationships, which allows as
many as four systems to be directly associated with each other. This SVC partnership
capability enables the implementation of disaster recovery (DR) solutions.

Note: For more information about restrictions and limitations of native IP replication, see
8.6.2, “IP partnership limitations” on page 438.

Figure 8-26 shows an example of a multiple system mirroring configuration.

Figure 8-26 Multiple system mirroring configuration example

Software-level restrictions for multiple system mirroring: Note the following points:
򐂰 A partnership between a system that is running 6.1.0 and a system that is running a
version earlier than 4.3.1 is not supported.
򐂰 Systems in a partnership where one system is running 6.1.0 and the other system is
running 4.3.1 cannot participate in other partnerships with other systems.
򐂰 Systems that are all running 6.1.0 or 5.1.0 can participate in up to three system
partnerships.
򐂰 To use an SVC as a system partner, the SVC must have 6.3.0 or newer code and be
configured to operate in the replication layer. Layer settings are only available on the
SVC.
򐂰 To use native IP replication between SVC systems, both systems must be at SVC code
7.2 or higher. A maximum of one IP partnership is allowed per SVC system.

Object name length: SVC 6.1 supports object names up to 63 characters. Previous levels
supported object names of up to 15 characters only.

When SVC 6.1 systems are partnered with 4.3.1 and 5.1.0 systems, various object names
are truncated at 15 characters when they are displayed from 4.3.1 and 5.1.0 systems.

Supported multiple system mirroring topologies


Multiple system mirroring allows for various partnership topologies, as shown in the examples
in Figure 8-27.

The following example is a star topology: A → B, A → C, and A → D.

Figure 8-27 SVC star topology

Figure 8-27 shows four systems in a star topology, with System A at the center. System A can
be a central DR site for the three other locations.

By using a star topology, you can migrate applications by using a process, such as the
process that is described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).

4. Synchronize to system C, and ensure that A → C is established:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C

Figure 8-28 shows an example of a triangle topology, for example: A → B, A → C, and B → C.

Figure 8-28 SVC triangle topology

Figure 8-29 shows an example of an SVC fully connected topology, for example: A → B, A →
C, A → D, B → C, B → D, and C → D.

Figure 8-29 SVC fully connected topology

Figure 8-29 is a fully connected mesh in which every system has a partnership to each of the
three other systems. This topology allows volumes to be replicated between any pair of
systems, for example: A → B, A → C, and B → C.

Figure 8-30 shows a daisy-chain topology.

Figure 8-30 SVC daisy-chain topology

Although systems can have up to three partnerships, volumes can be part of only one remote
copy relationship, for example, A → B.

System partnership intermix: All of the preceding topologies are valid for the intermix of
an SVC with another SVC if the SVC is set to the replication layer and running 6.3.0 code
or later.
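
On systems where the layer attribute can be changed (the setting is not applicable on all platforms), the current value can be checked with lssystem and adjusted with chsystem. The following is a sketch only, and it assumes that no partnerships or remote copy relationships exist when the layer is changed:

lssystem
chsystem -layer replication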

8.7.2 Importance of write ordering
Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, and to not lose data that existed before the failure. Because
many applications must perform large numbers of update operations in parallel, maintaining
write ordering is key to ensuring the correct operation of applications following a disruption.

An application that performs a high volume of database updates is designed with the concept
of dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing or performing the order of writes
differently than the application intended can undermine the application’s algorithms and can
lead to problems, such as detected or undetected data corruption.

The SVC MM and GM implementation operates in a manner that is designed to always keep
a consistent image at the secondary site. The SVC GM implementation uses complex
algorithms that operate to identify sets of data and number those sets of data in sequence.
The data is then applied at the secondary site in the defined sequence.

Operating in this manner ensures that if the relationship is in a consistent_synchronized state,
your GM target data is at least crash consistent and allows for quick recovery through your
application crash recovery facilities.

For more information about dependent writes, see 8.4.3, “Consistency Groups” on page 416.

Remote Copy Consistency Groups


A Remote Copy Consistency Group can contain an arbitrary number of relationships up to the
maximum number of MM/GM relationships that is supported by the SVC system. MM/GM
commands can be issued to a Remote Copy Consistency Group and, therefore,
simultaneously for all MM/GM relationships that are defined within that Consistency Group or
to a single MM/GM relationship that is not part of a Remote Copy Consistency Group. For
example, when a startrcconsistgrp command is issued to the Consistency Group, all of the
MM/GM relationships in the Consistency Group are started at the same time.

Figure 8-31 on page 459 shows the concept of MM Consistency Groups. The same concept
applies to GM Consistency Groups.

Figure 8-31 MM Consistency Group

Because MM_Relationship 1 and 2 are part of the Consistency Group, they can be handled
as one entity. The stand-alone MM_Relationship 3 is handled separately.

Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.

Consider the following points:


򐂰 MM/GM relationships can be part of a Consistency Group, or they can be stand-alone
and, therefore, handled as single instances.
򐂰 A Consistency Group can contain zero or more relationships. An empty Consistency
Group with zero relationships in it has little purpose until it is assigned its first relationship,
except that it has a name.
򐂰 All relationships in a Consistency Group must have corresponding master and auxiliary
volumes.
򐂰 All relationships in one Consistency Group must be the same type, for example, only MM
or only GM.

Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.

For example, consider the case of two applications that are independent, yet they are placed
into a single Consistency Group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is progressing,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.

If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes even though access is safe in this
case. The MM/GM policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent.

Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.
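
The following CLI sketch shows how a Remote Copy Consistency Group might be created, populated, and started. The volume, relationship, group, and remote system names are hypothetical:

mkrcconsistgrp -cluster SVC_B -name CG_DB
mkrcrelationship -master DB_Data_A -aux DB_Data_B -cluster SVC_B -consistgrp CG_DB -name REL_DB_Data
mkrcrelationship -master DB_Log_A -aux DB_Log_B -cluster SVC_B -consistgrp CG_DB -name REL_DB_Log
startrcconsistgrp CG_DB

Both relationships then start, stop, and change state together as a single entity.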

8.7.3 Remote copy intercluster communication


In traditional Fibre Channel (FC), the intercluster communication between systems in an MM
and GM partnership is performed over the SAN. In the following section, we describe this
communication path.

For more information about intercluster communication between systems in an IP
partnership, see 8.6.5, “States of IP partnership” on page 440.

Zoning
The SVC node ports on each SVC system must communicate with each other to create the
partnership. Switch zoning is critical to facilitating intercluster communication.

Intercluster communication channels


When an SVC system partnership is defined on a pair of systems, the following intercluster
communication channels are established:
򐂰 A single control channel, which is used to exchange and coordinate configuration
information
򐂰 I/O channels between each of these nodes in the systems

These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and they are repaired to maintain operation where possible. If communication
between the SVC systems is interrupted or lost, an event is logged (and the MM and GM
relationships stop).

Alerts: You can configure the SVC to raise Simple Network Management Protocol (SNMP)
traps to the enterprise monitoring system to alert on events that indicate that an
interruption in internode communication occurred.

Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This
database is updated as devices appear and disappear.

Devices that advertise themselves as SVC nodes are categorized according to the SVC
system to which they belong. The SVC nodes that belong to the same system establish
communication channels between themselves and begin to exchange messages to
implement clustering and the functional protocols of the SVC.

Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.

The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.

If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.

Note: You can use chsystem with -partnerfcportmask to dedicate several SVC FC
ports only to system-to-system traffic to ensure that remote copy is not affected by other
traffic, such as host-to-node traffic or node-to-node traffic within the same system.
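
The port mask is a binary string in which each bit position represents an FC port ID (the lowest-order bit corresponds to port 1) and a value of 1 allows that port to be used for system-to-system traffic. The following sketch, with a hypothetical mask that enables ports 3 and 4 only, shows the general form of the command:

chsystem -partnerfcportmask 1100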

8.7.4 Metro Mirror overview


MM establishes a synchronous relationship between two volumes of equal size. The volumes
in an MM relationship are referred to as the master (primary) volume and the auxiliary
(secondary) volume. Traditional FC MM is primarily used in a metropolitan area or
geographical area up to a maximum distance of 300 km (186.4 miles) to provide synchronous
replication of data. With synchronous copies, host applications write to the master volume,
but they do not receive confirmation that the write operation completed until the data is written
to the auxiliary volume. This action ensures that both the volumes have identical data when
the copy completes. After the initial copy completes, the MM function maintains a fully
synchronized copy of the source data at the target site at all times.

MM has the following characteristics:


򐂰 Zero recovery point objective (RPO)
򐂰 Synchronous
򐂰 Production application performance that is affected by round-trip latency

Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your MM auxiliary
location.

Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups and GM Consistency Groups (FlashCopy
Consistency Groups and GM Consistency Groups are described in 8.4, “Implementing the
SAN Volume Controller FlashCopy” on page 414).

The SVC provides intracluster and intercluster MM.

Intracluster Metro Mirror


Intracluster MM performs the intracluster copying of a volume, in which both volumes belong
to the same system and I/O Group within the system. Because it is within the same I/O
Group, bitmap space must be sufficient within the I/O Group for both sets of volumes and
licensing must be on the system.

Important: Performing MM across I/O Groups within a system is not supported.

Intercluster Metro Mirror


Intercluster MM performs intercluster copying of a volume, in which one volume belongs to a
system and the other volume belongs to a separate system.

Two SVC systems must be defined in an SVC partnership, which must be performed on both
SVC systems to establish a fully functional MM partnership.
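
At the CLI level, an FC-based partnership is typically created from both systems. The following sketch uses a hypothetical remote system name and example bandwidth values, and it assumes code V7.3 or later (earlier levels use mkpartnership instead):

mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 SVC_B
lspartnership

The equivalent command, naming System A, must also be run on the remote system so that the partnership becomes fully_configured.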

By using standard single-mode connections, the supported distance between two SVC
systems in an MM partnership is 10 km (6.2 miles), although greater distances can be
achieved by using extenders. For extended distance solutions, contact your IBM marketing
representative.

Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.

8.7.5 Synchronous remote copy


MM is a fully synchronous remote copy technique that ensures that writes are committed at
both the master and auxiliary volumes before write completion is acknowledged to the host,
but only if writes to the auxiliary volumes are possible.

Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, MM suspends writes to the
auxiliary volume and allows I/O to the master volume to continue to avoid affecting the
operation of the master volumes.

Figure 8-32 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.

Figure 8-32 Write on volume in MM relationship

However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional FC MM has distance limitations that
are based on your performance requirements. The SVC does not support more than 300 km
(186.4 miles).

8.7.6 Metro Mirror features
SVC MM supports the following features:
򐂰 Synchronous remote copy of volumes that are dispersed over metropolitan distances.
򐂰 The SVC implements MM relationships between volume pairs, with each volume in a pair
that is managed by an SVC system (requires code version 6.3.0 or later).
򐂰 The SVC supports intracluster MM where both volumes belong to the same system (and
I/O Group).
򐂰 The SVC supports intercluster MM where each volume belongs to a separate SVC
system. You can configure a specific SVC system for partnership with another system. All
intercluster MM processing occurs between two SVC systems that are configured in a
partnership.
򐂰 Intercluster and intracluster MM can be used concurrently.
򐂰 The SVC does not require that a control network or fabric is installed to manage MM. For
intercluster MM, the SVC maintains a control link between two systems. This control link is
used to control the state and coordinate updates at either end. The control link is
implemented on top of the same FC fabric connection that the SVC uses for MM I/O.
򐂰 The SVC implements a configuration model that maintains the MM configuration and state
through major events, such as failover, recovery, and resynchronization, to minimize user
configuration action through these events.

The SVC allows the resynchronization of changed data so that write failures that occur on the
master or auxiliary volumes do not require a complete resynchronization of the relationship.

8.7.7 Metro Mirror attributes


The MM function in the SVC offers the following attributes:
򐂰 An SVC system partnership is created between two SVC systems, or between an SVC
system and a Storwize system that operates in the replication layer (for intercluster MM).
򐂰 An MM relationship is created between two volumes of the same size.
򐂰 To manage multiple MM relationships as one entity, relationships can be made part of an
MM Consistency Group, which ensures data consistency across multiple MM relationships
and provides ease of management.
򐂰 When an MM relationship is started and when the background copy completes, the
relationship becomes consistent and synchronized.
򐂰 After the relationship is synchronized, the auxiliary volume holds a copy of the production
data at the primary, which can be used for DR.
򐂰 The auxiliary volume is in read-only mode when the relationship is active.
򐂰 To access the auxiliary volume, the MM relationship must be stopped with the access
option enabled, before write I/O is allowed to the auxiliary.
򐂰 The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

8.7.8 Global Mirror


In the following topics, we describe the GM copy service, which is an asynchronous remote
copy service. It provides and maintains a consistent mirrored copy of a source volume to a
target volume.

GM establishes a GM relationship between two volumes of equal size. The volumes in a GM
relationship are referred to as the master (source) volume and the auxiliary (target) volume,
which is the same as MM.

Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups.

GM writes data to the auxiliary volume asynchronously, which means that host writes to the
master volume provide the host with confirmation that the write is complete before the I/O
completes on the auxiliary volume.

GM has the following characteristics:


򐂰 Near-zero RPO
򐂰 Asynchronous
򐂰 Production application performance that is affected by I/O sequencing preparation time

Intracluster Global Mirror


Although GM is available for intracluster, it has no functional value for production use.
Intracluster MM provides the same capability with less overhead. However, leaving this
functionality in place simplifies testing and allows for client experimentation and testing (for
example, to validate server failover on a single test system). As with intracluster MM, you
must consider the increase in the license requirement because the source and target exist on
the same SVC system.

Intercluster Global Mirror


Intercluster GM operations require a pair of SVC systems that are connected by a number of
intercluster links. The two SVC systems must be defined in an SVC system partnership to
establish a fully functional GM relationship.

Limit: When a local fabric and a remote fabric are connected for GM purposes, the ISL
hop count between a local node and a remote node must not exceed seven hops.

8.7.9 Asynchronous remote copy


GM is an asynchronous remote copy technique. In asynchronous remote copy, the write
operations are completed on the primary site and the write acknowledgment is sent to the
host before it is received at the secondary site. An update of this write operation is sent to the
secondary site at a later stage, which provides the capability to perform remote copy over
distances that exceed the limitations of synchronous remote copy.

The GM function provides the same function as MM remote copy, but over long-distance links
with higher latency without requiring the hosts to wait for the full round-trip delay of the
long-distance link.

Figure 8-33 on page 465 shows that a write operation to the master volume is acknowledged
back to the host that is issuing the write before the write operation is mirrored to the cache for
the auxiliary volume.

Figure 8-33 GM write sequence

The GM algorithms maintain a consistent image on the auxiliary at all times. They achieve this
consistent image by identifying sets of I/Os that are active concurrently at the master,
assigning an order to those sets, and applying those sets of I/Os in the assigned order at the
secondary. As a result, GM maintains the features of Write Ordering and Read Stability.

The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system; therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.

In SVC code 7.2, these algorithms are enhanced to optimize GM behavior and latency even
further. GM write I/O from the production SVC system to a secondary SVC system requires
serialization and sequence-tagging before being sent across the network to the remote site
(to maintain a write-order consistent copy of data). Sequence-tagged GM writes on the
secondary system are processed without the parallelism and management of write I/O
sequencing that imposes more latency on write I/Os in code versions before 7.2. As a result,
high-bandwidth GM throughput environments can experience performance impacts on the
primary system during high I/O peak periods.

Starting with code V7.2, the SVC allows more parallelism in processing and managing GM
writes on a secondary system by using the following methods:
򐂰 Nodes on the secondary system store replication writes in new redundant non-volatile
cache
򐂰 Cache content details are shared between nodes
򐂰 Cache content details are batched together to make node-to-node latency less of an issue
򐂰 Nodes intelligently apply these batches in parallel as soon as possible
򐂰 Nodes internally manage and optimize GM secondary write I/O processing

Note: The V7.2 enhancements of GM require no changes in administration and
management. However, before you upgrade to SVC V7.2, you must stop all GM
relationships. The proper checks are provided in the latest svcupgradetest utility.

In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.

GM is supported over FC, FC over IP (FCIP), FC over Ethernet (FCOE), and native IP
connections. The maximum distance cannot exceed 80 ms round trip, which is about
4,000 km (2,485.48 miles) between mirrored systems. But, starting with Storwize code V7.4,
this distance was significantly increased for certain SVC Gen2 and SVC configurations.
Figure 8-34 shows the current supported distances for GM remote copy.

Figure 8-34 Supported GM distances

8.7.10 SVC Global Mirror features


SVC GM supports the following features:
򐂰 Asynchronous remote copy of volumes that are dispersed over metropolitan-scale
distances.
򐂰 The SVC implements the GM relationship between a volume pair, with each volume in the
pair being managed by an SVC system or a Storwize V7000 system. The Storwize V7000 must be in
the replication layer and running at least V6.3.
򐂰 The SVC supports intracluster GM where both volumes belong to the same system (and
I/O Group).
򐂰 The SVC intercluster GM is supported if each volume belongs to a separate SVC system.
An SVC system can be configured for partnership with between one and three other
systems. For more information about IP partnership restrictions, see 8.6.2, “IP partnership
limitations” on page 438.
򐂰 Intercluster and intracluster GM can be used concurrently but not for the same volume.
򐂰 The SVC does not require a control network or fabric to be installed to manage GM. For
intercluster GM, the SVC maintains a control link between the two systems. This control
link is used to control the state and to coordinate the updates at either end. The control
link is implemented on top of the same FC fabric connection that the SVC uses for GM I/O.
򐂰 The SVC implements a configuration model that maintains the GM configuration and state
through major events, such as failover, recovery, and resynchronization, to minimize user
configuration action through these events.

򐂰 The SVC implements flexible resynchronization support, enabling it to resynchronize
volume pairs that experienced write I/Os to both disks and to resynchronize only those
regions that changed.
򐂰 An optional feature for GM permits a delay simulation to be applied on writes that are sent
to auxiliary volumes. The delay simulation is useful in intracluster scenarios for testing
purposes.
򐂰 As of SVC 6.3.0 and later, GM source and target volumes can be associated with Change
Volumes.

Colliding writes
Before V4.3.1, the GM algorithm required that only a single write is active on any 512-byte
logical block address (LBA) of a volume. If a further write is received from a host while the
auxiliary write is still active (even though the master write might complete), the new host write
is delayed until the auxiliary write is complete. This restriction is needed if a series of writes to
the auxiliary must be retried (which is called reconstruction). Conceptually, the data for
reconstruction comes from the master volume.

If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent.

Applications that deliver such write activity do not achieve the performance that GM is
intended to support. A volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the GM
algorithm. Master writes still need to be serialized, and the intermediate states of the master
data must be kept in a non-volatile journal while the writes are outstanding to maintain the
correct write ordering during reconstruction. Reconstruction must never overwrite data on the
auxiliary with an earlier version. The volume statistic that is monitoring colliding writes is now
limited to those writes that are not affected by this change.

Figure 8-35 shows a colliding write sequence example.

Figure 8-35 Colliding writes example

The following numbers correspond to the numbers that are shown in Figure 8-35 on
page 467:
򐂰 (1) The first write is performed from the host to LBA X.
򐂰 (2) The completion of the write is acknowledged to the host even though the mirrored write
to the auxiliary volume is not yet complete.
򐂰 (1’) and (2’) Steps occur asynchronously with the first write.
򐂰 (3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
򐂰 (4) The completion of the second write is acknowledged to the host.

Delay simulation
An optional feature for GM permits a delay simulation to be applied on writes that are sent to
auxiliary volumes. This feature allows you to test to detect colliding writes. Therefore, you can
use this feature to test an application before the full deployment of the feature. The feature
can be enabled separately for each intracluster or intercluster GM. You specify the delay
setting by using the chsystem command and view the delay by using the lssystem command.
The gm_intra_cluster_delay_simulation field expresses the amount of time that intracluster
auxiliary I/Os are delayed. The gm_inter_cluster_delay_simulation field expresses the
amount of time that intercluster auxiliary I/Os are delayed. A value of zero disables the
feature.
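
A minimal sketch, assuming that the chsystem parameters correspond to the lssystem fields that are described above and using an example delay of 20 ms:

chsystem -gminterdelaysimulation 20
chsystem -gmintradelaysimulation 20
lssystem

Setting both values back to 0 disables the simulation.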

Tip: If you are experiencing repeated problems with the delay on your link, ensure that the
delay simulator was correctly disabled.

8.7.11 Using Change Volumes with Global Mirror


GM is designed to achieve an RPO as low as possible so that data is as up-to-date as
possible. This design places several strict requirements on your infrastructure. In certain
situations with low network link quality or congested or overloaded hosts, you might be
affected by multiple 1920 congestion errors.

Congestion errors happen in the following primary situations:


򐂰 Congestion at the source site through the host or network
򐂰 Congestion in the network link or network path
򐂰 Congestion at the target site through the host or network

GM has functionality that is designed to address the following conditions, which might
negatively affect certain GM implementations:
򐂰 The estimation of the bandwidth requirements tends to be complex.
򐂰 Ensuring that the latency and bandwidth requirements can be met is often difficult.
򐂰 Congested hosts on the source or target site can cause disruption.
򐂰 Congested network links can cause disruption with only intermittent peaks.

To address these issues, Change Volumes were added as an option for GM relationships.
Change Volumes use the FlashCopy functionality, but they cannot be manipulated as
FlashCopy volumes because they are for a special purpose only. Change Volumes replicate
point-in-time images on a cycling period (the default is 300 seconds). Only the data as it
exists at the point in time that the image is taken must be replicated, instead of all the
updates that occurred during the period. The use of this function can provide significant
reductions in replication volume.

GM with Change Volumes has the following characteristics:
򐂰 Larger RPO
򐂰 Point-in-time copies
򐂰 Asynchronous
򐂰 Possible system performance overhead because point-in-time copies are created locally

Figure 8-36 shows a simple GM relationship without Change Volumes.

Figure 8-36 GM without Change Volumes

With GM with Change Volumes, this environment looks as shown in Figure 8-37.

Figure 8-37 GM with Change Volumes

With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary GM volume at the
target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.

How Change Volumes might save you replication traffic is shown in Figure 8-38 on page 470.

Figure 8-38 GM I/O replication without Change Volumes

In Figure 8-38, you can see a number of I/Os on the source and the same number on the
target, and in the same order. Assuming that this data is the same set of data being updated
repeatedly, this approach results in wasted network traffic. The I/O can be completed much
more efficiently, as shown in Figure 8-39.

Figure 8-39 GM I/O with Change Volumes

In Figure 8-39, the same data is being updated repeatedly; therefore, Change Volumes
demonstrate significant I/O transmission savings by needing to send I/O number 16 only,
which was the last I/O before the cycling period.

You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes offers the following possibilities for RPO:
򐂰 If your replication completes in the cycling period, your RPO is twice the cycling period.
򐂰 If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
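
For example, an existing GM relationship might be converted to use Change Volumes and a longer cycling period with a sequence similar to the following sketch. The relationship and Change Volume names are hypothetical, the -cyclingmode, -masterchange, and -auxchange parameters are assumed to be available at this code level, and the relationship must be stopped while these attributes are changed (the auxiliary Change Volume is assigned from the auxiliary system):

chrcrelationship -masterchange GM_Master_CV GM_REL1
chrcrelationship -auxchange GM_Aux_CV GM_REL1
chrcrelationship -cycleperiodseconds 600 GM_REL1
chrcrelationship -cyclingmode multi GM_REL1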

Carefully consider your business requirements versus the performance of GM with Change
Volumes. GM with Change Volumes increases the intercluster traffic for more frequent cycling
periods. Therefore, selecting the shortest cycle periods possible is not always the answer. In
most cases, the default must meet requirements and perform well.

Important: When you create your GM volumes with Change Volumes, ensure that you
remember to select the Change Volume on the auxiliary (target) site. Failure to do so
leaves you exposed during a resynchronization operation.

8.7.12 Distribution of work among nodes
For the best performance, MM/GM volumes must have their preferred nodes evenly
distributed among the nodes of the systems. Each volume within an I/O Group has a
preferred node property that can be used to balance the I/O load between nodes in that
group. MM/GM also uses this property to route I/O between systems.

If this preferred practice is not maintained, for example, source volumes are assigned to only
one node in the I/O Group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. Starting with firmware V7.3, the preferred node can be
changed without changing the I/O Group, which does not affect the host I/O to a particular
volume. Additionally, you can now also change the preferred node for volumes that
are in a remote copy relationship. The remote copy relationship type does not matter. (The
remote copy relationship type can be MM, GM, or GM with Change Volumes.) You can
change the preferred node both to the source and target volumes that are participating in the
remote copy relationship.
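
For example, at V7.3 and later, the preferred node of a volume can be changed non-disruptively with the movevdisk command. The node and volume names in this sketch are hypothetical:

movevdisk -node node2 GM_Master_Vol01
lsvdisk GM_Master_Vol01

The lsvdisk output shows the new preferred node for the volume.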

8.7.13 Background copy performance


The background copy performance is subject to sufficient RAID controller bandwidth.
Performance is also subject to other potential bottlenecks, such as the intercluster fabric, and
possible contention from host I/O for the SVC bandwidth resources.

Background copy I/O is scheduled to avoid bursts of activity that might adversely affect
system behavior. An entire grain of tracks on one volume is processed at around the same
time but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for a while.
Multiple grains might be copied simultaneously and might be enough to satisfy the requested
rate, unless the available resources cannot sustain the requested rate.

GM paces the rate at which background copy is performed by the appropriate relationships.
Background copy occurs on relationships that are in the InconsistentCopying state with a
status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly among all
of the nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node, in turn, divides its allocation evenly between the multiple relationships that are
performing a background copy.

The default value of the background copy is 25 MBps per volume.

Important: The background copy value is a system-wide parameter that can be changed
dynamically but only on a system basis and not on a relationship basis. Therefore, the copy
rate of all relationships changes when this value is increased or decreased. In systems
with many remote copy relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed between
1 - 1000 MBps.
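
A sketch of changing this value, under the assumption that the per-relationship background copy rate maps to the relationshipbandwidthlimit system attribute (the value is specified in MBps):

chsystem -relationshipbandwidthlimit 50
lssystem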

8.7.14 Thin-provisioned background copy
MM and GM relationships preserve the space-efficiency of the master. Conceptually, the
background copy process detects a deallocated region of the master and sends a special
“zero buffer” to the auxiliary. If the auxiliary volume is thin-provisioned and the region is
deallocated, the special buffer prevents a write and, therefore, an allocation. If the auxiliary
volume is not thin-provisioned or the region in question is an allocated region of a
thin-provisioned volume, a buffer of “real” zeros is synthesized on the auxiliary and written as
normal.

8.7.15 Methods of synchronization


This section describes two methods that can be used to establish a synchronized
relationship.

Full synchronization after creation


The full synchronization after creation method is the default method. It is the simplest method
because it requires no administrative activity apart from issuing the necessary commands.
However, in certain environments, the available bandwidth can make this method unsuitable.

Use the following command sequence for a single relationship:


򐂰 Run mkrcrelationship without specifying the -sync option.
򐂰 Run startrcrelationship without specifying the -clean option.
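
A minimal CLI sketch of this sequence follows; the volume, relationship, and system names
are hypothetical:

mkrcrelationship -master MM_Vol_1 -aux MM_Vol_1_aux -cluster ITSO_SVC_DR -name MM_Rel_1
startrcrelationship MM_Rel_1

Because -sync is omitted, the relationship is created in the InconsistentStopped state, and
starting it begins the background copy.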

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain
identical data before creating the relationship by using the following technique:
򐂰 Both disks are created with the security delete feature to make all data zero.
򐂰 A complete tape image (or other method of moving data) is copied from one disk to the
other disk.

With this technique, do not allow I/O on the master or auxiliary before the relationship is
established.

Then, the administrator must run the following commands:


򐂰 Run mkrcrelationship with the -sync flag.
򐂰 Run startrcrelationship without the -clean flag.
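
A minimal sketch with the same hypothetical names as in the previous example:

mkrcrelationship -master MM_Vol_1 -aux MM_Vol_1_aux -cluster ITSO_SVC_DR -name MM_Rel_1 -sync
startrcrelationship MM_Rel_1

Because -sync is specified, the relationship is created in the ConsistentStopped state and no
background copy is performed.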

Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, which creates a data loss or data integrity
exposure for hosts that access data on the auxiliary volume.

8.7.16 Practical use of Metro Mirror


The master volume is the production volume, and updates to this copy are mirrored in real
time to the auxiliary volume. The contents of the auxiliary volume that existed when the
relationship was created are destroyed.

Switching copy direction: The copy direction for an MM relationship can be switched so
that the auxiliary volume becomes the master, and the master volume becomes the
auxiliary, which is similar to the FlashCopy restore option. However, although the
FlashCopy target volume can operate in read/write mode, the target volume of the started
remote copy is always in read-only mode.

While the MM relationship is active, the auxiliary volume is not accessible for host application
write I/O at any time. The SVC allows read-only access to the auxiliary volume when it
contains a consistent image. This read-only access allows boot time operating system
discovery to complete without error, so that any hosts at the secondary site can be ready to
start the applications with minimum delay, if required.

For example, many operating systems must read LBA 0 to configure a logical unit. Although
read access is allowed at the auxiliary, in practice the data on the auxiliary volumes cannot be
read by a host because most operating systems write a “dirty bit” to the file system when it is
mounted. Because this write operation is not allowed on the auxiliary volume, the volume
cannot be mounted.

This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the MM
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
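
For example, a minimal sketch of enabling host access to the auxiliary volume (the
relationship name is hypothetical):

stoprcrelationship -access MM_Rel_1

The relationship enters the Idling state, and the auxiliary volume accepts write I/O until the
relationship is restarted with a new copy direction.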

For example, the MM requirement to enable the auxiliary copy for access differentiates it from
third-party mirroring software on the host, which aims to emulate a single, reliable disk
regardless of what system is accessing it. MM retains the property that two volumes exist, but
it suppresses one volume while the copy is being maintained.

The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks that must be performed on the host to establish operation
on the auxiliary copy are substantial. The goal is to make this copy rapid (much faster than
recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The SVC
provides SNMP traps and programming (or scripting) for the CLI to enable this automation.

8.7.17 Practical use of Global Mirror


The practical use of GM is similar to the use of MM that is described in 8.7.16, “Practical use
of Metro Mirror” on page 472. The main difference between the two remote copy modes is
that GM and GM with Change Volumes are typically used over much longer distances than MM.
Weak link quality or insufficient bandwidth between the primary and secondary sites can also
be a reason to prefer asynchronous GM over synchronous MM. Otherwise, the use cases for
MM and GM are the same.

8.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror


Table 8-9 on page 474 lists the combinations of FlashCopy, MM, and GM functions that are
valid for a single volume.



Table 8-9 Valid combination for a single volume

FlashCopy           MM or GM source     MM or GM target
FlashCopy source    Supported           Supported
FlashCopy target    Supported           Not supported

8.7.19 Remote Copy configuration limits


Table 8-10 lists the MM and GM configuration limits.

Table 8-10 MM and GM configuration limits

Parameter                                                  Value
Number of MM or GM Consistency Groups per system           256
Number of MM or GM relationships per system                8,192
Number of MM or GM relationships per Consistency Group     8,192
Total volume size per I/O Group                            A per I/O Group limit of 1,024 TB exists on the quantity of
                                                           master and auxiliary volume address spaces that can
                                                           participate in MM and GM relationships. This maximum
                                                           configuration uses all 512 MB of bitmap space for the
                                                           I/O Group, and it allows 10 MB of space for all remaining
                                                           copy services features.

8.7.20 Remote Copy states and events


In this section, we describe the various states of an MM/GM relationship and the conditions
that cause them to change.

In Figure 8-40 on page 475, the MM/GM relationship state diagram shows an overview of the
states that can apply to an MM/GM relationship in a connected state.

Figure 8-40 MM or GM mapping state diagram

When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are created for volumes that were created
with the format option.

The following step identifiers are shown in Figure 8-40:


򐂰 Step 1:
a. The MM/GM relationship is created with the -sync option, and the MM/GM relationship
enters the ConsistentStopped state.
b. The MM/GM relationship is created without specifying that the master and auxiliary
volumes are in sync, and the MM/GM relationship enters the InconsistentStopped
state.
򐂰 Step 2:
a. When an MM/GM relationship is started in the ConsistentStopped state, the MM/GM
relationship enters the ConsistentSynchronized state, provided that no updates (write
I/Os) were performed on the master volume while it was in the ConsistentStopped
state. Otherwise, the -force option must be specified, and the MM/GM relationship then
enters the InconsistentCopying state while the background copy is started.
b. When an MM/GM relationship is started in the InconsistentStopped state, the MM/GM
relationship enters the InconsistentCopying state while the background copy is started.



򐂰 Step 3
When the background copy completes, the MM/GM relationship transitions from the
InconsistentCopying state to the ConsistentSynchronized state.
򐂰 Step 4:
a. When an MM/GM relationship is stopped in the ConsistentSynchronized state, the
MM/GM relationship enters the Idling state when you specify the -access option,
which enables write I/O on the auxiliary volume.
b. When an MM/GM relationship is stopped in the ConsistentSynchronized state without
the -access parameter, the auxiliary volumes remain read-only and the state of the
relationship changes to ConsistentStopped.
c. To enable write I/O on the auxiliary volume when the MM/GM relationship is in the
ConsistentStopped state, issue the stoprcrelationship command with the -access
option, and the MM/GM relationship enters the Idling state.
򐂰 Step 5:
a. When an MM/GM relationship is started from the Idling state, you must specify the
-primary argument to set the copy direction. If no write I/O was performed (to the
master or auxiliary volume) while in the Idling state, the MM/GM relationship enters the
ConsistentSynchronized state.
b. If write I/O was performed to the master or auxiliary volume, the -force option must be
specified and the MM/GM relationship then enters the InconsistentCopying state while
the background copy is started. The background copy copies only the data that
changed on the primary volume while the relationship was stopped.

Stop or error
When an MM/GM relationship is stopped (intentionally or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.

If the connection is broken between the SVC systems that are in a partnership, all
(intercluster) MM/GM relationships enter a Disconnected state. For more information, see
“Connected versus disconnected” on page 476.

Common states: Stand-alone relationships and Consistency Groups share a common


configuration and state model. All MM/GM relationships in a Consistency Group have the
same state as the Consistency Group.

State overview
In the following sections, we provide an overview of the various MM/GM states.

Connected versus disconnected


Under certain error scenarios (for example, a power failure at one site that causes one
complete system to disappear), communications between two systems in an MM/GM
relationship can be lost. Alternatively, the fabric connection between the two systems might
fail, which leaves the two systems running but they cannot communicate with each other.

When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships that span them are described as disconnected.

In this state, both systems are left with fragmented relationships and are limited as far as the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.

When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when the relationship became disconnected or enter a new state.

Relationships that are configured between volumes in the same SVC system (intracluster)
are never described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain volumes that are operating as secondaries can be described as
being consistent or inconsistent. Consistency Groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the auxiliary to the data on the master volume. It can
be considered a property of the auxiliary volume.

An auxiliary volume is described as consistent if it contains data that might be read by a host
system from the master if power failed at an imaginary point while I/O was in progress, and
power was later restored. This imaginary point is defined as the recovery point. The
requirements for consistency are expressed regarding activity at the master up to the
recovery point.

The auxiliary volume contains the data from all of the writes to the master for which the host
received successful completion and that data was not overwritten by a subsequent write
(before the recovery point).

For writes for which the host did not receive a successful completion (that is, it received a bad
completion or no completion at all), and the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from the
master.

From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).

If an application is designed to cope with an unexpected power failure, this assurance of


consistency means that the application can use the auxiliary and begin operation as though it
was restarted after the hypothetical power failure. Again, maintaining the application write
ordering is the key property of consistency.

For more information about dependent writes, see 8.4.3, “Consistency Groups” on page 416.

If a relationship (or a set of relationships) is inconsistent and an attempt is made to start an


application by using the data in the secondaries, the following outcomes are possible:
򐂰 The application might decide that the data is corrupted and crash or exit with an event
code.
򐂰 The application might fail to detect that the data is corrupted and return erroneous data.
򐂰 The application might work without a problem.



Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a


Consistency Group. Write ordering is a concept that an application can maintain across a
number of disks that are accessed through multiple systems; therefore, consistency must
operate across all those disks.

When you are deciding how to use Consistency Groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.

If two programs or systems communicate and store details as a result of the information that
is exchanged, either of the following actions might occur:
򐂰 All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
򐂰 The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the master and auxiliary volumes differ only in regions where writes are
outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.

When communication is lost for an extended period, MM/GM tracks the changes that
occurred on the master, but not the order or the details of the changes (write data). When
communication is restored, it is impossible to synchronize the auxiliary without sending write
data to the auxiliary out of order and, therefore, losing consistency.

The following policies can be used to cope with this situation:


򐂰 Make a point-in-time copy of the consistent auxiliary before you allow the auxiliary to
become inconsistent. If there is a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent (although out-of-date) image.
򐂰 Accept the loss of consistency and the loss of a useful auxiliary while synchronizing the
auxiliary.

Detailed states
In the following sections, we describe the states that are portrayed to the user for either
Consistency Groups or relationships. We also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent.

This state is entered when the relationship or Consistency Group was InconsistentCopying
and experienced a persistent error or received a stop command that caused the copy process
to stop.

A start command causes the relationship or Consistency Group to move to the


InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship


or a Consistency Group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or Consistency Group.

In this state, a background copy process runs that copies data from the master to the auxiliary
volume.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.

A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted, but it has no effect.

If the background copy process completes on a stand-alone relationship or on all


relationships for a Consistency Group, the relationship or Consistency Group transitions to
the ConsistentSynchronized state.

If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master.

This state can arise when a relationship was in a ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created as already synchronized, that is, with its CreateConsistentFlag set to TRUE (the -sync
option in the CLI).

Normally, write activity that follows an I/O error causes updates to the master and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group transitions to
InconsistentCopying. Enter this command only after all outstanding events are repaired.

In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.

If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to


ConsistentDisconnected. The master transitions to IdlingDisconnected.



An informational status log is generated whenever a relationship or Consistency Group enters
the ConsistentStopped state with a status of Online. You can configure this event to generate
an SNMP trap that can be used to trigger automation or manual intervention to issue a start
command following a loss of synchronization.

ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O.

Writes that are sent to the master volume are also sent to the auxiliary volume. Either
successful completion must be received for both writes, the write must be failed to the host, or
a state must transition out of the ConsistentSynchronized state before a write is completed to
the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it


reverses the master and auxiliary roles (it switches the direction of replicating data).

A start command is accepted, but it has no effect.

If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.

In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine the areas that must be copied after a start command.

The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.

After a start command, the relationship or Consistency Group transitions to


ConsistentSynchronized if consistency is not lost or to InconsistentCopying if consistency is
lost.

Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.

The priority in this state is to recover the link to restore the relationship or consistency.

No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
򐂰 The state when it became disconnected
򐂰 The write activity since it was disconnected
򐂰 The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when it is reconnected.

While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and do not accept read or write
I/O.

Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.

When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions are true:
򐂰 The relationship was InconsistentStopped when it became disconnected.
򐂰 The user issued a stop command while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write
I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary


side of a relationship becomes disconnected.

In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.

A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.

When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
򐂰 The relationship was ConsistentSynchronized when it became disconnected.
򐂰 No writes received successful completion at the master while disconnected.



Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.

It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point the state of the relationship becomes the
state of the Consistency Group.

8.8 Remote Copy commands


In this section, we describe the commands that you issue to create and operate Remote Copy
services.

8.8.1 Remote Copy process


The MM/GM process includes the following steps:
1. An SVC system partnership is created between two SVC systems (for intercluster
MM/GM).
2. An MM/GM relationship is created between two volumes of the same size.
3. To manage multiple MM/GM relationships as one entity, the relationships can be made
part of an MM/GM Consistency Group to ensure data consistency across multiple MM/GM
relationships or for ease of management.
4. The MM/GM relationship is started and when the background copy completes, the
relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the
master that can be used for disaster recovery.
6. To access the auxiliary volume, the MM/GM relationship must be stopped with the access
option enabled before write I/O is submitted to the auxiliary.

The remote host server is mapped to the auxiliary volume and the disk is available for I/O.
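
The following minimal CLI sketch summarizes these steps for a single intercluster MM
relationship. The volume, relationship, and system names are hypothetical, and the
partnership bandwidth parameters depend on your code level (see 8.8.4, “SVC system
partnership”):
1. Create the partnership on both systems (for the mkfcpartnership or mkippartnership
   parameters, see 8.8.4).
2. Create and start the relationship from the local system:
   mkrcrelationship -master MM_DB_Vol -aux MM_DB_Vol_aux -cluster ITSO_SVC_DR -name MM_DB_Rel
   startrcrelationship MM_DB_Rel
3. Monitor the relationship until its state is ConsistentSynchronized:
   lsrcrelationship MM_DB_Rel
4. For disaster recovery, stop the relationship with access enabled and map the auxiliary
   volume to the remote host:
   stoprcrelationship -access MM_DB_Rel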

For more information about MM/GM commands, see the IBM System Storage SAN Volume
Controller and IBM Storwize V7000 Command-Line Interface User’s Guide, GC27-2287.

The command set for MM/GM contains the following broad groups:
򐂰 Commands to create, delete, and manipulate relationships and Consistency Groups
򐂰 Commands to cause state changes

If a configuration command affects more than one system, MM/GM performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
be performed only when the systems are connected and fail with no effect when they are
disconnected.

Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.

For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a mkrcrelationship or
mkrcconsistgrp command, in which case the system that receives the command is called the
local system.

The exception is the command that sets systems into an MM/GM partnership. The
mkfcpartnership and mkippartnership commands must be issued on both the local and
remote systems.

The commands in this section are described as an abstract command set and are
implemented by either of the following methods:
򐂰 The CLI can be used for scripting and automation.
򐂰 The GUI can be used for one-off tasks.

8.8.2 Listing available SVC system partners


Use the lspartnershipcandidate command to list the systems that are available for setting
up a two-system partnership. This command is a prerequisite for creating MM/GM
relationships.

Note: This command is not supported on IP partnerships. Use the mkippartnership


command for IP connections.

8.8.3 Changing the system parameters


When you want to change system parameters that are specific to any remote copy or to GM
only, use the chsystem command.

chsystem command


The chsystem command features the following parameters for MM/GM:
򐂰 -relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can
synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this
value can now be specified as 1 MBps - 1000 MBps. The partnership overall limit is
controlled by the chpartnership -bandwidth command, which must be set on each
involved system.

Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure that
connects the remote sites, regardless of the compression rates that you might achieve.

򐂰 -gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values 60 - 86400 seconds in increments of 10
seconds. The default value is 300. Do not change this value except under the direction of
IBM Support.



򐂰 -gminterdelaysimulation inter_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary volume) is delayed. This parameter permits you to test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intercluster GM relationship separately.
򐂰 -gmintradelaysimulation intra_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to an auxiliary volume) is delayed. By using this parameter, you can test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intracluster GM relationship separately.

Use the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300

You can view all of these parameter values by using the lssystem command.

gmlinktolerance
We focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged, and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.

However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships and the application host’s response time
returns to normal. After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state until you fix the cause of the event and restart your GM
relationships. For this reason, ensure that you monitor the system to track when these 1920
events occur.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
򐂰 During SAN maintenance windows in which degraded performance is expected from SAN
components and application hosts can withstand extended response times from GM
volumes.
򐂰 During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
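
For example, a minimal sketch of disabling the gmlinktolerance feature for a maintenance
window and restoring the default value of 300 seconds afterward:

chsystem -gmlinktolerance 0
chsystem -gmlinktolerance 300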

A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).

If 1920 events are occurring, it can be necessary to use a performance monitoring and
analysis tool, such as the IBM Tivoli Storage Productivity Center, to help identify and resolve
the problem.

8.8.4 SVC system partnership


To create an SVC system partnership, use the mkfcpartnership command for traditional
Fibre Channel (FC or FCoE) connections or use the mkippartnership command for IP-based
connections.

mkfcpartnership command


Use the mkfcpartnership command to establish a one-way MM/GM partnership between the
local system and a remote system. Alternatively, use the mkippartnership command to
create an IP-based partnership.

To establish a fully functional MM/GM partnership, you must issue this command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the SVC systems.

When the partnership is created, you can specify the bandwidth to be used by the
background copy process between the local SVC system and the remote SVC system. If it is
not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that
is less than or equal to the bandwidth that can be sustained by the intercluster link.
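
A minimal sketch of creating the partnership between two hypothetical systems follows. The
sketch assumes the -linkbandwidthmbits and -backgroundcopyrate parameters that the FC
and IP partnership commands share on recent code levels; verify the exact parameter names
for your code level in the CLI guide.

On ITSO_SVC_LOCAL: mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC_DR
On ITSO_SVC_DR:    mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC_LOCAL

Running the command on both systems makes the partnership fully functional.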

Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy is
attempted for MM/GM. The background copy bandwidth can affect foreground I/O latency in
one of the following ways:
򐂰 The following result can occur if the background copy bandwidth is set too high compared
to the MM/GM intercluster link capacity:
– The background copy I/Os can back up on the MM/GM intercluster link.
– There is a delay in the synchronous auxiliary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.
򐂰 If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
򐂰 If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary site overload the auxiliary storage and again
delay the synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, ensure that you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak
foreground I/O workload. Perform this provisioning by calculation or by determining
experimentally how much background copy can be allowed before the foreground I/O latency
becomes unacceptable. Then, reduce the background copy to accommodate peaks in
workload and another safety margin.



chpartnership command
To change the bandwidth that is available for background copy in an SVC system partnership,
use the chpartnership command to specify the new bandwidth.

8.8.5 Creating a Metro Mirror/Global Mirror Consistency Group


Use the mkrcconsistgrp command to create an empty MM/GM Consistency Group.

The MM/GM Consistency Group name must be unique across all Consistency Groups that
are known to the systems that own this Consistency Group. If the Consistency Group involves
two systems, the systems must be in communication throughout the creation process.

The new Consistency Group does not contain any relationships, and it is in the Empty state.
You can add MM/GM relationships to the group (upon creation or afterward) by using the
chrcrelationship command.
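
For example, a minimal sketch of creating a Consistency Group and adding an existing
relationship to it (the group, relationship, and system names are hypothetical):

mkrcconsistgrp -name CG_DB -cluster ITSO_SVC_DR
chrcrelationship -consistgrp CG_DB MM_DB_Rel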

8.8.6 Creating a Metro Mirror/Global Mirror relationship


Use the mkrcrelationship command to create a new MM/GM relationship. This relationship
persists until it is deleted.

Optional parameter: If you do not use the -global optional parameter, an MM relationship
is created instead of a GM relationship.

The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary volume cannot be in an existing relationship and they cannot be the targets of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when it
is successful.

When the MM/GM relationship is created, you can add it to a Consistency Group that exists
or it can be a stand-alone MM/GM relationship if no Consistency Group is specified.

The lsrcrelationshipcandidate command


Use the lsrcrelationshipcandidate command to list the volumes that are eligible to form an
MM/GM relationship.

When the command is issued, you can specify the master volume name and auxiliary system
to list the candidates that comply with the prerequisites to create an MM/GM relationship. If
the command is issued with no parameters, all of the volumes that are not disallowed by
another configuration state, such as being a FlashCopy target, are listed.

8.8.7 Changing a Metro Mirror/Global Mirror relationship


Use the chrcrelationship command to modify the following properties of an MM/GM
relationship:
򐂰 Change the name of an MM/GM relationship.
򐂰 Add a relationship to a group.
򐂰 Remove a relationship from a group by using the -force flag.

Adding an MM/GM relationship: When an MM/GM relationship is added to a
Consistency Group that is not empty, the relationship must have the same state and copy
direction as the group to be added to it.

8.8.8 Changing a Metro Mirror/Global Mirror Consistency Group


Use the chrcconsistgrp command to change the name of an MM/GM Consistency Group.

8.8.9 Starting a Metro Mirror/Global Mirror relationship


Use the startrcrelationship command to start the copy process of an MM/GM relationship.

When the command is issued, you can set the copy direction if it is undefined, and, optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
as an attempt to start a relationship that is already a part of a Consistency Group.

You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship. The use of the -force parameter here is a reminder
that the data on the auxiliary becomes inconsistent while resynchronization (background
copying) occurs and, therefore, is unusable for DR purposes before the background copy
completes.

In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
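
For example, a minimal sketch of restarting a hypothetical relationship from the Idling state,
setting the master as the copy source and accepting the temporary loss of consistency:

startrcrelationship -primary master -force MM_DB_Rel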

8.8.10 Stopping a Metro Mirror/Global Mirror relationship


Use the stoprcrelationship command to stop the copy process for a relationship. You can
also use this command to enable write access to a consistent auxiliary volume by specifying
the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a


relationship that is part of a Consistency Group. You can issue this command to stop a
relationship that is copying from master to auxiliary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped,


ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the stoprcrelationship command to enable write access to the auxiliary
volume.



8.8.11 Starting a Metro Mirror/Global Mirror Consistency Group
Use the startrcconsistgrp command to start an MM/GM Consistency Group. You can issue
this command only to a Consistency Group that is connected.

For a Consistency Group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.

8.8.12 Stopping a Metro Mirror/Global Mirror Consistency Group


Use the stoprcconsistgrp command to stop the copy process for an MM/GM Consistency
Group. You can also use this command to enable write access to the auxiliary volumes in the
group if the group is in a consistent state.

If the Consistency Group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary volumes that belong to the relationships in the group. For a
Consistency Group in the ConsistentSynchronized state, this command causes a
Consistency Freeze.

When a Consistency Group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the stoprcconsistgrp command to enable write access to the auxiliary
volumes within that group.
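
For example, a minimal sketch of stopping a hypothetical Consistency Group and enabling
write access to its auxiliary volumes:

stoprcconsistgrp -access CG_DB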

8.8.13 Deleting a Metro Mirror/Global Mirror relationship


Use the rmrcrelationship command to delete the relationship that is specified. Deleting a
relationship deletes only the logical relationship between the two volumes. It does not affect
the volumes themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.

A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the
relationship from the Consistency Group.

If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.

8.8.14 Deleting a Metro Mirror/Global Mirror Consistency Group


Use the rmrcconsistgrp command to delete an MM/GM Consistency Group. This command
deletes the specified Consistency Group. You can issue this command for any existing
Consistency Group.

If the Consistency Group is disconnected at the time that the command is issued, the
Consistency Group is deleted only on the system on which the command is being run. When
the systems reconnect, the Consistency Group is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the Consistency
Group on both systems, you can issue the rmrcconsistgrp command separately on both of
the systems.

If the Consistency Group is not empty, the relationships within it are removed from the
Consistency Group before the group is deleted. These relationships then become
stand-alone relationships. The state of these relationships is not changed by the action of
removing them from the Consistency Group.

8.8.15 Reversing a Metro Mirror/Global Mirror relationship


Use the switchrcrelationship command to reverse the roles of the master volume and the
auxiliary volume when a stand-alone relationship is in a consistent state. When the command
is issued, the master that you want must be specified.

8.8.16 Reversing a Metro Mirror/Global Mirror Consistency Group


Use the switchrcconsistgrp command to reverse the roles of the master volume and the
auxiliary volume when a Consistency Group is in a consistent state. This change is applied to
all of the relationships in the Consistency Group. When the command is issued, the master
that you want must be specified.

Important: Remember, by reversing the roles, your current source volumes become
targets and target volumes become source volumes. Therefore, you will lose write access
to your current primary volumes.
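
For example, a minimal sketch of making the current auxiliary volumes the new copy source
for a hypothetical Consistency Group; the sketch assumes the -primary parameter, which is
used to name the wanted master:

switchrcconsistgrp -primary aux CG_DB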

8.9 Troubleshooting remote copy


Remote copy (MM and GM) has two primary error codes that are displayed: 1920 and 1720.
A 1920 is a congestion error. This error means that the source, the link between the source
and target, or the target cannot keep up with the requested copy rate. A 1720 error is a
heartbeat or system partnership communication error. This error often is more serious
because failing communication between your system partners involves extended diagnostic
time.

8.9.1 1920 error


A 1920 error (event ID 050010) can have several triggers, including the following probable
causes:
򐂰 Primary 2145 system or SAN fabric problem (10%)
򐂰 Primary 2145 system or SAN fabric configuration (10%)
򐂰 Secondary 2145 system or SAN fabric problem (15%)
򐂰 Secondary 2145 system or SAN fabric configuration (25%)
򐂰 Intercluster link problem (15%)
򐂰 Intercluster link configuration (25%)



In practice, the most often overlooked cause is latency. GM has a round-trip-time tolerance
limit of 80 or 250 milliseconds, depending on the firmware version and the hardware model.
See Figure 8-34 on page 466. A message that is sent from your source SVC system to your
target SVC system and the accompanying acknowledgment must have a total round-trip time
of no more than 80 or 250 milliseconds. That is, the one-way latency can be no more than 40
or 125 milliseconds.

The primary component of your round-trip time is the physical distance between sites. For
every 1,000 kilometers (621.36 miles), you observe a 5-millisecond delay each way. This
delay does not include the time that is added by equipment in the path. Every device adds a
varying amount of time depending on the device, but a good rule is 25 microseconds for pure
hardware devices. For software-based functions (such as compression that is implemented in
applications), the added delay tends to be much higher (usually in the millisecond plus
range.) Next, we describe an example of a physical delay.

Company A has a production site that is 1,900 kilometers (1,180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites.

Now, there are seven devices and 1,900 kilometers (1,180.6 miles) of distance delay. Assume
that the devices add a combined 200 microseconds of delay each way. The distance adds
9.5 milliseconds each way, for a total of 19 milliseconds. When combined with the device
latency, the result is a minimum of 19.4 milliseconds of physical latency, which is comfortably
under the 80-millisecond limit of GM. However, remember that this number is the best-case
figure.

The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link; therefore, ensure that you stay as far beneath the GM
round-trip-time (RTT) limit as possible. You can easily double or triple the expected physical
latency with a lower-quality network link or a lower-bandwidth network link. Then, you are
within the range of exceeding the limit if high I/O occurs that exceeds the existing bandwidth
capacity.

When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not correctly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to allow you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT or round-trip time by using
standard 64-byte ping packets.

In Figure 8-41 on page 491, you can see why the effective transit time must be measured only
by using packets that are large enough to hold an FC frame, or 2,148 bytes (2,112 bytes of
payload and 36 bytes of header). Allow some overhead to be safe because various switch
vendors have optional features that might increase this size. After you verify your latency by
using the proper packet size, proceed with normal hardware troubleshooting.

Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a certain bandwidth. The required time to
move a specific amount of data decreases as the data transmission rate increases.
Figure 8-41 on page 491 shows the orders of magnitude of difference between the link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient.
Never use a TCP/IP ping to measure RTT for FCIP traffic.

Figure 8-41 Effect of packet size (in bytes) versus the link size

In Figure 8-41, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
򐂰 64 bytes: The size of the common ping packet
򐂰 1,500 bytes: The size of the standard TCP/IP packet
򐂰 2,148 bytes: The size of an FC frame

Finally, your path maximum transmission unit (MTU) affects the delay that is incurred to get a
packet from one location to another location. An MTU might cause fragmentation or be too
large and cause too many retransmits when a packet is lost.

8.9.2 1720 error


The 1720 error (event ID 050020) is the other problem remote copy might encounter. The
amount of bandwidth that is needed for system-to-system communications varies based on
the number of nodes. It is important that it is not zero. When a partner on either side stops
communication, you see a 1720 error appear in your error log. According to the product
documentation, there are no likely field-replaceable unit breakages or other causes.

The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, if your fabric has more than 64 host bus
adapter (HBA) ports zoned, check your fabric configuration for zoning of more than one HBA
port for each node per I/O Group. One port for each node per I/O Group per fabric that is
associated with the host is the recommended zoning configuration for fabrics.



For those fabrics with 64 or more host ports, this recommendation becomes a rule. With this
zoning, you see four paths to each volume that is discovered on the host because each host
needs at least two FC ports from separate HBA cards, each in a separate fabric. On each
fabric, each host FC port is zoned to two SVC ports, each from a different node in the
I/O Group. Therefore, you have four paths per host volume. More than four paths per volume
are supported but not recommended.

Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer through IBM Tivoli Storage Productivity
Center and comparing against your sample interval reveals potential SAN congestion. If a
zero buffer credit timer is above 2% of the total time of the sample interval, it might cause
problems.

Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences can indicate a larger problem.

If you receive multiple 1720 errors, recheck your network connection and then check the
SVC partnership information to verify its status and settings. Then, proceed to perform
diagnostics for every piece of equipment in the path between the two SVC systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.

If your investigations fail to resolve your remote copy problems, contact your IBM Support
representative for a complete analysis.

Chapter 9. SAN Volume Controller operations using the command-line interface

In this chapter, we describe operational management. We use the command-line interface
(CLI) to demonstrate normal operation and advanced operation. You can use the CLI or GUI
to manage IBM System Storage SAN Volume Controller (SVC) operations. We use the CLI in
this chapter. You can script these operations. We think that it is easier to create the
documentation for the scripts by using the CLI. This chapter assumes a fully functional SVC
environment and includes the following topics:
򐂰 Normal operations by using CLI
򐂰 New commands and functions
򐂰 Working with managed disks and disk controller systems
򐂰 Working with hosts
򐂰 Working with the Ethernet port for iSCSI
򐂰 Working with volumes
򐂰 Scripting under the CLI for SAN Volume Controller task automation
򐂰 SAN Volume Controller advanced operations by using the CLI
򐂰 Managing the clustered system by using the CLI
򐂰 Nodes
򐂰 I/O Groups
򐂰 Managing authentication
򐂰 Managing Copy Services
򐂰 Metro Mirror operation
򐂰 Global Mirror operation
򐂰 Service and maintenance
򐂰 Backing up the SAN Volume Controller system configuration
򐂰 Restoring the SAN Volume Controller clustered system configuration
򐂰 Working with the SAN Volume Controller quorum MDisks
򐂰 Working with the Service Assistant menu
򐂰 SAN troubleshooting and data collection
򐂰 T3 recovery process



9.1 Normal operations by using CLI
In the following topics, we describe the commands that are most commonly used in normal
operations.

9.1.1 Command syntax and online help

Command prefix changes: The svctask and svcinfo command prefixes are no longer
needed when you are issuing a command. If you have existing scripts that use those
prefixes, they continue to function. You do not need to change your scripts.

The following major command sets are available:


򐂰 By using the svcinfo command, you can query the various components within the SVC
environment.
򐂰 By using the svctask command, you can change the various components within the SVC.

When the command syntax is shown, you see certain parameters in square brackets, for
example [parameter]. These brackets indicate that the parameter is optional in most (if not
all) instances. Any information that is not in square brackets is required information. You can
view the syntax of a command by entering one of the following commands:
򐂰 svcinfo -? shows a complete list of informational commands.
򐂰 svctask -? shows a complete list of task commands.
򐂰 svcinfo commandname -? shows the syntax of informational commands.
򐂰 svctask commandname -? shows the syntax of task commands.
򐂰 svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the informational commands.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.

If you review the syntax of a command by entering svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
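
As a quick sketch of filtering (the volume names here match examples that appear later in
this chapter, and the wildcard value is enclosed in double quotation marks), the following
command lists only the volumes whose names start with Volume:

IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue "name=Volume*"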

Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.

Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This
command produces an alphabetical list of actions that are supported. The command parameter
must be svcinfo for display commands or svctask for execution commands. The model
parameter allows for different shortcuts on different platforms, 2145 or 2076, as shown in the
following example:

<command> shortcuts <model>

Example 9-1 on page 495 is a full list of all shortcut commands.

Example 9-1 Shortcut commands
IBM_2145:ITSO_SVC2:superuser>svctask shortcuts 2145
addhostiogrp
addhostport
addmdisk
addnode
addvdiskaccess
addvdiskcopy
applydrivesoftware
applysoftware
cancellivedump
cfgportip
charray
charraymember
chauthservice
chcontroller
chcurrentuser
chdrive
chemail
chemailserver
chemailuser
chenclosure
chenclosurecanister
chenclosureslot
chencryption
cherrstate
cheventlog
chfcconsistgrp
chfcmap
chhost
chiogrp
chldap
chldapserver
chlicense
chmdisk
chmdiskgrp
chnode
chnodebattery
chnodebootdrive
chnodehw
chpartnership
chquorum
chrcconsistgrp
chrcrelationship
chsecurity
chsite
chsnmpserver
chsyslogserver
chsystem
chsystemip
chuser
chusergrp
chvdisk
cleardumps
clearerrlog
cpdumps
detectmdisk
dumpallmdiskbadblocks
dumpauditlog
dumperrlog
dumpmdiskbadblocks
enablecli
expandvdisksize
finderr
includemdisk
migrateexts
migratetoimage
migratevdisk
mkarray
mkcloudmdisk
mkemailserver
mkemailuser
mkfcconsistgrp
mkfcmap
mkfcpartnership
mkhost
mkippartnership
mkldapserver
mkmdiskgrp
mkpartnership
mkrcconsistgrp
mkrcrelationship
mksnmpserver
mksyslogserver
mkuser
mkusergrp
mkvdisk
mkvdiskhostmap
movevdisk
ping
preplivedump
prestartfcconsistgrp
prestartfcmap
recoverarray
recoverarraybysystem
recovervdisk
recovervdiskbyiogrp
recovervdiskbysystem
repairsevdiskcopy
repairvdiskcopy
resetleds
rmarray
rmemailserver
rmemailuser
rmfcconsistgrp
rmfcmap
rmhost
rmhostiogrp
rmhostport
rmldapserver
rmmdisk
rmmdiskgrp
rmnode
rmpartnership
rmportip
rmportip_tms
rmrcconsistgrp
rmrcrelationship
rmsnmpserver
rmsyslogserver
rmuser
rmusergrp
rmvdisk
rmvdiskaccess
rmvdiskcopy
rmvdiskhostmap
sendinventoryemail
setdisktrace
setlocale
setpwdreset
setsystemtime
settimezone
settrace
shrinkvdisksize
splitvdiskcopy
startemail
startfcconsistgrp
startfcmap
startrcconsistgrp
startrcrelationship
startstats
starttrace
stopemail
stopfcconsistgrp
stopfcmap
stoprcconsistgrp
stoprcrelationship
stopsystem
stoptrace
switchrcconsistgrp
switchrcrelationship
testemail
triggerdrivedump
triggerenclosuredump
triggerlivedump
writesernum

The use of reverse-i-search
If you work on your SVC in the same PuTTY session for many hours and enter many
commands, scrolling back to find a previous or similar command can be time-consuming. In
this case, the reverse-i-search feature can help you quickly find any command that you
issued in your command history. By pressing Ctrl+R, you can interactively search through
the command history as you type. Pressing Ctrl+R at an empty command prompt gives you a
search prompt, as shown in Example 9-2.

Example 9-2 Using reverse-i-search


IBM_2145:ITSO_SVC2:superuser>lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 23 1
1 io_grp1 0 0 1
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0
(reverse-i-search)`s': lsiogrp

As shown in Example 9-2, we ran a lsiogrp command. By pressing Ctrl+R and entering s,
the command that we needed was recalled from history.

9.2 New commands and functions


The commands and functions that are described in Example 9-3 were introduced in version 7.4.0.0.

Example 9-3 New commands


Child pools:

In the current SVC system, the disk space of a storage pool comes from its MDisks, so the
capacity of a storage pool depends on the capacity of those MDisks. Creating or splitting a
storage pool is not a trivial operation, and you cannot freely create a storage pool with a
particular capacity. A child pool is a new object that is created from a physical (parent)
storage pool. It provides most of the functions that storage pools (mdiskgrps) have, for
example, volume creation, but you can specify the capacity of the child pool when you
create it.

Child pools also simplify the management of file systems that are based on IBM General
Parallel File System (GPFS). Creating or expanding such a file system normally requires you
to create network shared disks (NSDs) with a description of the disk usage (data or
metadata) that specifies the type of data to be stored on the disk, and then to use the
NSDs to create the file system, which is not easy for users without GPFS experience. By
defining an internal volume creation interface in the child pool, the system can manage the
file system disks automatically: the user provides only a quota for the metadata and data
child pools, and the file system disks are provisioned for GPFS without further
interaction. The file system disk manipulation is hidden from the user.

The following example creates a child pool within an existing MDisk group (storage pool):

mkmdiskgrp -name ChildPool_1 -unit gb -size 40 -parentmdiskgrp 1
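
After a child pool exists, you can create volumes in it in the same way as in a parent
pool by specifying the child pool name with the mkvdisk command. The following line is
only a sketch; it assumes the ChildPool_1 pool from the previous example and the io_grp0
I/O Group, so adjust the names and size to your environment:

mkvdisk -mdiskgrp ChildPool_1 -iogrp io_grp0 -size 10 -unit gb -name vdisk_in_child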

Security settings:

With the chsecurity command, you change the security level that is used for graphical user
interface (GUI) access.

To enable additional GUI security, run the following command:

chsecurity -sslprotocol security_level

The security_level value has the following meanings:

1 = allow TLS 1.0, 1.1, and 1.2, and disallow SSL 3.0
2 = additionally disallow TLS 1.0 and TLS 1.1
3 = additionally disallow TLS 1.2 cipher suites that are not exclusive to TLS 1.2

WARNING: Changing the security level can affect the GUI connection. If this happens, use
the SSH CLI to change the security level back to a known good level.

To view the current security level, issue the lssecurity command by using the SSH CLI.
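
As a sketch, the following commands raise the security level to 2 (disallowing SSL 3.0,
TLS 1.0, and TLS 1.1) and then display the current setting:

chsecurity -sslprotocol 2
lssecurity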

Volume protection:

If you issue the rmvdisk command, you can delete a vdisk (volume) unless that vdisk has a
host mapping or is part of a FlashCopy, Metro Mirror, or Global Mirror relationship. In any
of these cases, the rmvdisk command fails and you must use the -force flag to override the
failure.

When volume protection is enabled, you are protected against unintentionally deleting a
volume, even with the -force parameter added, within the time period that you specify. If
the last I/O to the volume occurred within the specified time period, the rmvdisk command
fails, and you must either wait until the volume really is considered idle, or disable the
system setting, delete or unmap the volume, and re-enable the setting.

The following command enables volume protection with a period of 60 minutes. No output is
returned when the command completes:

IBM_2145:ITSO_SVC2:superuser>chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60

The next command disables volume protection. Again, no output is returned:

IBM_2145:ITSO_SVC2:superuser>chsystem -vdiskprotectionenabled no

Issue the lssystem command to verify whether volume protection is enabled or disabled. If
it is enabled, you see the following fields at the bottom of the output:

vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller

If it is disabled or was never enabled, you see the following fields:

vdisk_protection_time 0
vdisk_protection_enabled no
product_name IBM SAN Volume Controller

The minimum time for volume protection is 15 minutes and the maximum time is 1440 minutes.

SVC 7.4.0.0 includes command changes and the addition of attributes and variables for
several existing commands. For more information, see the command reference or help, which
is available at this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html

9.3 Working with managed disks and disk controller systems


This section describes various configuration and administrative tasks for managed disks
(MDisks) within the SVC environment and the tasks that you can perform at a disk controller
level.

9.3.1 Viewing disk controller details


Use the lscontroller command to display summary information about all available back-end
storage systems.

To display more detailed information about a specific controller, run the command again and
append the controller name or ID, for example, controller ID 4, as shown in Example 9-4.

Example 9-4 lscontroller command


IBM_2145:ITSO_SVC2:superuser>lscontroller 4
id 4
controller_name DS 3400
WWNN 200600A0B85AD223
mdisk_link_count 6
max_mdisk_link_count 6
degraded no
vendor_id IBM
product_id_low 1726-4xx
product_id_high FAStT
product_revision 0617
ctrl_s/n
allow_quorum yes
fabric_type fc
site_id 2
site_name site2
WWPN 203600A0B85AD223
path_count 6
max_path_count 6
WWPN 202600A0B85AD223
path_count 6
max_path_count 6

9.3.2 Renaming a controller


Use the chcontroller command to change the name of a storage controller. To verify the
change, run the lscontroller command. Example 9-5 shows both of these commands.

Example 9-5 chcontroller command


IBM_2145:ITSO_SVC2:superuser>chcontroller -name DS_3400 4
IBM_2145:ITSO_SVC2:superuser>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 V7000_Gen2 2076 IBM 2145
1 controller1 2076 IBM 2145
2 controller2 2076 IBM 2145
3 controller3 2076 IBM 2145
4 DS_3400 IBM 1726-4xx
FAStT

This command renames the controller that is named DS 3400 to DS_3400.

Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “controller” because this prefix is reserved for SVC assignment only.

9.3.3 Discovery status


Use the lsdiscoverystatus command to determine whether a discovery operation is in
progress, as shown in Example 9-6. The output of this command is a status of active or
inactive.

Example 9-6 lsdiscoverystatus command


IBM_2145:ITSO_SVC2:superuser>lsdiscoverystatus
id scope IO_group_id IO_group_name status
0 fc_fabric inactive

This command displays the state of all discoveries in the clustered system. During discovery,
the system updates the drive and MDisk records. You must wait until the discovery finishes
and is inactive before you attempt to use the system. This command displays one of the
following results:
򐂰 Active: A discovery operation is in progress at the time that the command is issued.
򐂰 Inactive: No discovery operations are in progress at the time that the command is issued.

9.3.4 Discovering MDisks
The clustered system detects the MDisks automatically when they appear in the network.
However, certain Fibre Channel (FC) controllers do not send the required Small Computer
System Interface (SCSI) primitives that are necessary to automatically discover the new
MDisks.

If new storage was attached and the clustered system did not detect the new storage, you
might need to run this command before the system can detect the new MDisks.

Use the detectmdisk command to scan for newly added MDisks, as shown in Example 9-7.

Example 9-7 detectmdisk


IBM_2145:ITSO_SVC2:superuser>detectmdisk

To check whether any newly added MDisks were successfully detected, run the lsmdisk
command and look for new unmanaged MDisks.

If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem and that the zones are set up correctly.

Discovery process: If you assigned many logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Check several times by using the lsmdisk command to
see whether all the expected MDisks are present.

When all the disks that are allocated to the SVC are seen from the SVC system, the following
procedure is a useful way to verify the MDisks that are unmanaged and ready to be added to
the storage pool.

Complete the following steps to display MDisks:


1. Enter the lsmdiskcandidate command, as shown in Example 9-8. This command displays
all detected MDisks that are not part of a storage pool.

Example 9-8 lsmdiskcandidate command


IBM_2145:ITSO_SVC2:superuser>lsmdiskcandidate
id
0
1
2
.
.

Alternatively, you can list all MDisks (managed or unmanaged) by running the lsmdisk
command, as shown in Example 9-9.

Example 9-9 lsmdisk command


IBM_2145:ITSO_SVC2:superuser>lsmdisk -filtervalue controller_name=controller1
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier encrypt
3 mdisk3 online managed 1 test_pool_01 128.0GB 0000000000000000
controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB 0000000000000001
controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB 0000000000000002
controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no

From this output, you can see more information, such as the status, about each MDisk.
For our current task, we are interested only in the unmanaged disks because they are
candidates for a storage pool.

Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.

2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the detectmdisk command, as shown in Example 9-10.

Example 9-10 detectmdisk command


IBM_2145:ITSO_SVC2:superuser>detectmdisk

3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not
visible, check that the LUNs from your subsystem were correctly assigned to the SVC and
that the appropriate zoning is in place (for example, the SVC can see the disk subsystem).

9.3.5 Viewing MDisk information


When you are viewing information about the MDisks (managed or unmanaged), you can use
the lsmdisk command to display overall summary information about all available managed
disks. To display more detailed information about a specific MDisk, run the command again
and append the MDisk name or ID (for example, mdisk0).

To display the overview, run lsmdisk with the -delim parameter, as shown in Example 9-11.

Example 9-11 lsmdisk command

IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "


id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 V7000_Gen2
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 V7000_Gen2
6005076400820008380000000000000100000000000000000000000000000000 enterprise no
2 mdisk2 online managed 3 DS3400_pool1 100.0GB 0000000000000002 V7000_Gen2
6005076400820008380000000000000200000000000000000000000000000000 enterprise no
3 mdisk3 online managed 1 test_pool_01 128.0GB 0000000000000000 controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB 0000000000000001 controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB 0000000000000002 controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no

To display the full details for an individual MDisk, run lsmdisk with the name or ID of the
MDisk about which you want information, as shown in Example 9-12.

Example 9-12 Usage of the lsmdisk name or ID command


IBM_2145:ITSO_SVC2:superuser>lsmdisk 6
id 6
name mdisk6
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name CompressedV7000
capacity 30.0GB
quorum_index
block_size 512
controller_name V7000_Gen2
ctrl_type 4
ctrl_WWNN 500507680B0021A8
controller_id 0
path_count 8
max_path_count 8
ctrl_LUN_# 0000000000000003
UID 6005076400820008380000000000000600000000000000000000000000000000
preferred_WWPN 500507680B2121A8
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise
slow_write_priority
fabric_type fc
site_id 1
site_name site1
easy_tier_load high
encrypt no

9.3.6 Renaming an MDisk


Use the chmdisk command to change the name of an MDisk. When you use this command,
be aware that the new name is listed first, and the ID or name of the MDisk to be renamed is
listed next. Use this format: chmdisk -name <new name> <current ID/name>. Use the lsmdisk
command to verify the change. Example 9-13 shows the chmdisk command.

Example 9-13 chmdisk command


IBM_2145:ITSO_SVC2:superuser>chmdisk -name mdisk_0 mdisk0

This command renamed the MDisk that is named mdisk0 to mdisk_0.

The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name
can be 1 - 63 characters. However, the new name cannot start with a number, dash, or the
word “MDisk” because this prefix is reserved for SVC assignment only.

9.3.7 Including an MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a SAN problem, or poorly planned maintenance. If
the error is a hardware fault, you can receive a Simple Network Management Protocol
(SNMP) alert about the state of the disk subsystem (before the disk was excluded), and you
can undertake preventive maintenance. If not, the hosts that use volumes (virtual disks, or
VDisks) with extents on the excluded MDisk receive I/O errors.

By running the lsmdisk command, you can see that mdisk8 is excluded, as shown in
Example 9-14.

Example 9-14 lsmdisk command: Excluded MDisk


IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 V7000_Gen2
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 V7000_Gen2
6005076400820008380000000000000100000000000000000000000000000000 enterprise no
2 mdisk2 online managed 3 DS3400_pool1 100.0GB 0000000000000002 V7000_Gen2
6005076400820008380000000000000200000000000000000000000000000000 enterprise no
3 mdisk3 online managed 1 test_pool_01 128.0GB 0000000000000000 controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB 0000000000000001 controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB 0000000000000002 controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 excluded managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no

After the necessary corrective action is taken to repair the MDisk (replace the failed disk,
repair the SAN zones, and so on), we must include the MDisk again. We issue the
includemdisk command (Example 9-15) because the SVC system does not include the
MDisk automatically.

Example 9-15 includemdisk


IBM_2145:ITSO_SVC2:superuser>includemdisk mdisk8

Running the lsmdisk command again shows that mdisk8 is online again, as shown in
Example 9-16.

Example 9-16 lsmdisk command: Verifying that an MDisk is included


IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 V7000_Gen2
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 V7000_Gen2
6005076400820008380000000000000100000000000000000000000000000000 enterprise no
2 mdisk2 online managed 3 DS3400_pool1 100.0GB 0000000000000002 V7000_Gen2
6005076400820008380000000000000200000000000000000000000000000000 enterprise no
3 mdisk3 online managed 1 test_pool_01 128.0GB 0000000000000000 controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB 0000000000000001 controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB 0000000000000002 controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB 0000000000000001 DS 3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB 0000000000000004 DS 3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB 0000000000000000 DS 3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB 0000000000000003 DS 3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB 0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB 0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB 0000000000000002 DS 3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB 0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB 0000000000000005 DS 3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no

9.3.8 Adding MDisks to a storage pool


If you created an empty storage pool, or if you want to assign more MDisks to a configured
storage pool, you can use the addmdisk command to populate the storage pool, as shown in
Example 9-17.

Example 9-17 addmdisk command


IBM_2145:ITSO_SVC2:superuser>addmdisk -mdisk mdisk6 STGPool_Multi_Tier

You can add only unmanaged MDisks to a storage pool. This command adds the MDisk
named mdisk6 to the storage pool that is named STGPool_Multi_Tier.

Important: Do not add this MDisk to a storage pool if you want to create an image mode
volume from the MDisk that you are adding. When you add an MDisk to a storage pool, it
becomes managed and extent mapping is not necessarily one-to-one anymore.

9.3.9 Showing MDisks in a storage pool


Use the lsmdisk -filtervalue command (as shown in Example 9-18) to see the MDisks that
are part of a specific storage pool. This command shows all of the MDisks that belong to
the storage pool that is named CompressedV7000.

Example 9-18 lsmdisk -filtervalue: MDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC2:superuser>lsmdisk -filtervalue mdisk_grp_name=CompressedV7000
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier encrypt
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003
V7000_Gen2 6005076400820008380000000000000600000000000000000000000000000000
enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004
V7000_Gen2 6005076400820008380000000000000700000000000000000000000000000000
enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005
V7000_Gen2 6005076400820008380000000000000800000000000000000000000000000000
enterprise no

By using a wildcard with this command, you can see all of the MDisks that are present in the
storage pools that are named CompressedV7000* (the asterisk (*) indicates a wildcard).
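
As a sketch, the following command lists the MDisks in every storage pool whose name starts
with CompressedV7000 (the wildcard value is enclosed in double quotation marks):

IBM_2145:ITSO_SVC2:superuser>lsmdisk -filtervalue "mdisk_grp_name=CompressedV7000*"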

9.3.10 Working with a storage pool


Before we can create any volumes on the SVC clustered system, we must virtualize the
storage that is allocated to the SVC. The LUNs that are assigned to the SVC appear as
MDisks, but we cannot start using them until they are members of a storage pool.
Therefore, one of our first operations is to create a storage pool in which we can place
our MDisks.

This section describes the operations that use MDisks and the storage pool. It also explains
the tasks that we can perform at the storage pool level.

9.3.11 Creating a storage pool


After a successful login to the CLI of the SVC, we create the storage pool.

Create a storage pool by using the mkmdiskgrp command, as shown in Example 9-19.

Example 9-19 mkmdiskgrp command


IBM_2145:ITSO_SVC2:super>mkmdiskgrp -name CompressedV7000 -ext 256
MDisk Group, id [3], successfully created

This command creates a storage pool that is called CompressedV7000. The extent size that is
used within this group is 256 MiB. We did not add any MDisks to the storage pool, so it is an
empty storage pool.

You can add unmanaged MDisks and create the storage pool in the same command. Use the
mkmdiskgrp command with the -mdisk parameter and enter the IDs or names of the MDisks to
add the MDisks immediately after the storage pool is created.

Before the creation of the storage pool, enter the lsmdisk command, as shown in
Example 9-20. This command lists all of the available MDisks that are seen by the SVC
system.

Example 9-20 Listing the available MDisks


IBM_2145:ITSO_SVC2:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
encrypt
0 mdisk0 online unmanaged 100.0GB
0000000000000000 V7000_Gen2
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB
0000000000000001 V7000_Gen2
6005076400820008380000000000000100000000000000000000000000000000 enterprise no
2 mdisk2 online managed 3 DS3400_pool1 100.0GB
0000000000000002 V7000_Gen2
6005076400820008380000000000000200000000000000000000000000000000 enterprise no
3 mdisk3 online managed 1 test_pool_01 128.0GB
0000000000000000 controller1
600507680282818b300000000000001e00000000000000000000000000000000 enterprise no
4 mdisk4 online managed 1 test_pool_01 128.0GB
0000000000000001 controller1
600507680282818b300000000000001f00000000000000000000000000000000 enterprise no
5 mdisk5 online managed 1 test_pool_01 128.0GB
0000000000000002 controller1
600507680282818b300000000000002000000000000000000000000000000000 enterprise no
6 mdisk6 online managed 0 CompressedV7000 30.0GB
0000000000000003 V7000_Gen2
6005076400820008380000000000000600000000000000000000000000000000 enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB
0000000000000004 V7000_Gen2
6005076400820008380000000000000700000000000000000000000000000000 enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB
0000000000000005 V7000_Gen2
6005076400820008380000000000000800000000000000000000000000000000 enterprise no
9 mdisk9 online unmanaged 4.0GB
0000000000000001 DS_3400
600a0b80005ad223000009a2545a3f0d00000000000000000000000000000000 enterprise no
10 mdisk16 online unmanaged 10.0GB
0000000000000004 DS_3400
600a0b80005ad2230000090e5458bdde00000000000000000000000000000000 enterprise no
11 mdisk11 online unmanaged 3.0GB
0000000000000000 DS_3400
600a0b80005ad223000009a1545a3c0900000000000000000000000000000000 enterprise no
12 mdisk12 online unmanaged 20.0GB
0000000000000003 DS_3400
600a0b80005ad2230000090f5458be5900000000000000000000000000000000 enterprise no
13 mdisk13 online image 5 Migration_Out 10.0GB
0000000000000006 V7000_Gen2
6005076400820008380000000000000c00000000000000000000000000000000 enterprise no
14 mdisk14 online unmanaged 20.0GB
0000000000000007 V7000_Gen2
6005076400820008380000000000000d00000000000000000000000000000000 enterprise no
15 mdisk15 online unmanaged 7.0GB
0000000000000002 DS_3400
600a0b80005ad223000009ab545ccec400000000000000000000000000000000 enterprise no
16 martin_test_source online unmanaged 8.0GB
0000000000000008 V7000_Gen2
6005076400820008380000000000001000000000000000000000000000000000 enterprise no
17 martin_test_target online unmanaged 9.0GB
0000000000000005 DS_3400
600a0b80005ad223000009d4545cee2d00000000000000000000000000000000 enterprise no
IBM_2145:ITSO_SVC2:superuser>

By using the same command (mkmdiskgrp) and knowing the IDs of the MDisks that we want to
use, we can create a storage pool and add multiple unmanaged MDisks to it at the same time,
as shown in Example 9-21 on page 510.

Example 9-21 Creating a storage pool and adding available MDisks
IBM_2145:ITSO_SVC2:superuser>mkmdiskgrp -name ITSO_Pool1 -ext 256 -mdisk 0:1
MDisk Group, id [2], successfully created

This command creates a storage pool that is called ITSO_Pool1. The extent size that is used
within this group is 256 MiB, and two MDisks (0 and 1) are added to the storage pool.

Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a
-name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the
SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created.

If you want to provide a name, you can use letters A - Z, a - z, numbers 0 - 9, and the
underscore (_). The name can be 1 - 63 characters, but it cannot start with a number or the
word “MDiskgrp” because this prefix is reserved for SVC assignment only.

By running the lsmdisk command, you now see the MDisks as managed and as part of the
CompressedV7000, as shown in Example 9-22.

Example 9-22 lsmdisk command


IBM_2145:ITSO_SVC2:superuser>lsmdisk -filtervalue mdisk_grp_name=CompressedV7000
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier encrypt
6 mdisk6 online managed 0 CompressedV7000 30.0GB 0000000000000003
V7000_Gen2 6005076400820008380000000000000600000000000000000000000000000000
enterprise no
7 mdisk7 online managed 0 CompressedV7000 30.0GB 0000000000000004
V7000_Gen2 6005076400820008380000000000000700000000000000000000000000000000
enterprise no
8 mdisk8 online managed 0 CompressedV7000 30.0GB 0000000000000005
V7000_Gen2 6005076400820008380000000000000800000000000000000000000000000000
enterprise no

In SVC 7.4, you can also create a child pool, which is a storage pool that is created inside
a parent pool, as shown in Example 9-23.

Example 9-23 Creating a child pool

IBM_2145:ITSO_SVC2:superuser>mkmdiskgrp -name ChildPool01 -unit gb -size 20 -parentmdiskgrp DS_3400

Now, the required tasks to create a storage pool are complete.

9.3.12 Viewing storage pool information


Use the lsmdiskgrp command to display information about the storage pools that are
defined in the SVC, as shown in Example 9-24.

Example 9-24 lsmdiskgrp command


IBM_2145:ITSO_SVC2:superuser>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_
capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st
atus:compression_active:compression_virtual_capacity:compression_compressed_capaci
ty:compression_uncompressed_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:chi
ld_mdisk_grp_count:child_mdisk_grp_capacity:type:encrypt
0:CompressedV7000:online:3:0:90.00GB:1024:90.00GB:0.00MB:0.00MB:0.00MB:0:80:auto:b
alanced:no:0.00MB:0.00MB:0.00MB:0:CompressedV7000:0:0.00MB:parent:no
1:test_pool_01:online:3:14:381.00GB:1024:366.00GB:14.00GB:11.00GB:11.07GB:3:80:off
:inactive:no:0.00MB:0.00MB:0.00MB:1:test_pool_01:0:0.00MB:parent:no
2:MigrationPool_8192:online:0:0:0:8192:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no
:0.00MB:0.00MB:0.00MB:2:MigrationPool_8192:0:0.00MB:parent:
3:DS3400_pool1:online:1:8:100.00GB:1024:42.00GB:62.00GB:57.00GB:57.02GB:62:80:auto
:balanced:no:0.00MB:0.00MB:0.00MB:3:DS3400_pool1:0:0.00MB:parent:no
5:Migration_Out:online:0:0:0:1024:0:0.00MB:0.00MB:0.00MB:0:80:auto:balanced:no:0.0
0MB:0.00MB:0.00MB:5:Migration_Out:0:0.00MB:parent:
6:MigrationPool_1024:online:0:0:0:1024:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no
:0.00MB:0.00MB:0.00MB:6:MigrationPool_1024:0:0.00MB:parent:

9.3.13 Renaming a storage pool


Use the chmdiskgrp command to change the name of a storage pool. To verify the change,
run the lsmdiskgrp command. Example 9-25 shows both of these commands.

Example 9-25 chmdiskgrp command


IBM_2145:ITSO_SVC2:superuser>chmdiskgrp -name STGPool_DS3500-2_new 1
IBM_2145:ITSO_SVC2:superuser>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_
capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st
atus
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:
0:auto:inactive
1:STGPool_DS3500-2_new:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00G
B:31:0:auto:inactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:in
active
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:
inactive

This command renamed the storage pool with ID 1 to STGPool_DS3500-2_new, as shown in the
lsmdiskgrp output.

Changing the storage pool: The chmdiskgrp command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “mdiskgrp” because this prefix is reserved for SVC assignment only.

9.3.14 Deleting a storage pool


Use the rmmdiskgrp command to remove a storage pool from the SVC system configuration,
as shown in Example 9-26.

Example 9-26 rmmdiskgrp


IBM_2145:ITSO_SVC2:superuser>rmmdiskgrp STGPool_DS3500-2_new

This command removes storage pool STGPool_DS3500-2_new from the SVC system
configuration.

Removing a storage pool from the SVC system configuration: If there are MDisks
within the storage pool, you must use the -force flag to remove the storage pool from the
SVC system configuration, as shown in the following example:
rmmdiskgrp STGPool_DS3500-2_new -force

Confirm that you want to use this flag because it destroys all mapping information and data
that is held on the volumes. The mapping information and data cannot be recovered.

9.3.15 Removing MDisks from a storage pool


Use the rmmdisk command to remove an MDisk from a storage pool, as shown in
Example 9-27.

Example 9-27 rmmdisk command


IBM_2145:ITSO_SVC2:superuser>rmmdisk -mdisk 8 -force 2

This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag
is set because volumes are using this storage pool.

Sufficient space: The removal occurs only if sufficient space is available to migrate the
volume data to extents on other MDisks that remain in the storage pool. After you remove
the MDisk from the storage pool, changing the mode from managed to unmanaged takes time,
depending on the size of the MDisk that you are removing.

9.4 Working with hosts


In this section, we explain the tasks that you can perform at a host level. When we create a
host in our SVC system, we must define the connection method. Starting with SVC 5.1, we
define our host as iSCSI-attached or FC-attached.

9.4.1 Creating an FC-attached host


In the following sections, we describe how to create an FC-attached host under various
circumstances.

Host is powered on, connected, and zoned to the SAN Volume Controller
When you create your host on the SVC, it is a preferred practice to check whether the host
bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By
checking, you ensure that zoning is done and that the correct WWPN is used. Run the
lshbaportcandidate command, as shown in Example 9-28.

Example 9-28 lshbaportcandidate command


IBM_2145:ITSO_SVC2:superuser>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After the WWPNs are displayed, match them against your host (use host or SAN switch
utilities to verify them), and then use the mkhost command to create your host.

Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).

You can use the letters A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore
(_). The name can be 1 - 63 characters. However, the name cannot start with a number,
dash, or the word “host” because this prefix is reserved for SVC assignment only.

The command to create a host is shown in Example 9-29.

Example 9-29 mkhost command


IBM_2145:ITSO_SVC2:superuser>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA
Host, id [2], successfully created

This command creates a host that is called Almaden that uses WWPN
21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Ports: You can define 1 - 8 ports per host, or you can use the addhostport command, which is
shown in 9.4.5, “Adding ports to a defined host” on page 516.

Host is not powered on or not connected to the SAN


If you want to create a host on the SVC without seeing your target WWPN by using the
lshbaportcandidate command, add the -force flag to your mkhost command, as shown in
Example 9-30. This option is more open to human error than if you choose the WWPN from a
list, but it is often used when many host definitions are created at the same time, such as
through a script.

In this case, you can enter the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 9-30.

Example 9-30 mkhost -force command


IBM_2145:ITSO_SVC2:superuser>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA -force
Host, id [2], successfully created

This command forces the creation of a host that is called Almaden that uses WWPN
210000E08B89C1CD:210000E08B054CAA.

WWPNs: WWPNs are not case sensitive in the CLI.

9.4.2 Creating an iSCSI-attached host


Now, we can create a host definition for a host that is not connected to the SAN but that has
LAN access to our SVC nodes. Before we create the host definition, we configure our SVC
system to use the iSCSI connection method. For more information about configuring
your nodes to use iSCSI, see 9.9.3, “iSCSI configuration” on page 552.

The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.

When we create a host that uses iSCSI as a communication method, iSCSI initiator software
must be installed on the host to initiate the communication between the SVC and the host.
This installation creates an iSCSI qualified name (IQN) identifier that is needed before we
create our host.

Before we start, we check our server’s IQN address (we are running Windows Server 2008).
We select Start → Programs → Administrative tools, and we select iSCSI initiator. The
IQN in our example is iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com, as
shown in Figure 9-1.

Figure 9-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 9-31. When the
command completes successfully, we display our created host.

Example 9-31 mkhost command

IBM_2145:ITSO_SVC2:superuser>mkhost -name Baldur -iogrp 0 -iscsiname
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO_SVC2:superuser>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

Important: When the host is initially configured, the default authentication method is set to
no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is
set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the
chhost command with the chapsecret parameter.
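
For example, the following command is a sketch that sets a CHAP secret for the iSCSI host
that is named Baldur; the secret value is only an illustration, so substitute your own value:

IBM_2145:ITSO_SVC2:superuser>chhost -chapsecret ITSOsecret01 Baldur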

The host definition is created. We map a volume to our new iSCSI server, as shown in
Example 9-32. We created the volume, as described in 9.6.1, “Creating a volume” on
page 520. In our scenario, our volume’s ID is 21 and the host name is Baldur. We map it to
our iSCSI host.

Example 9-32 Mapping a volume to the iSCSI host


IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

After the volume is mapped to the host, we display the host information again, as shown in
Example 9-33.

Example 9-33 lshost command


IBM_2145:ITSO_SVC2:superuser>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

Tip: FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.

If you must display a CHAP secret for a defined server, use the lsiscsiauth command. The
lsiscsiauth command lists the CHAP secret that is configured for authenticating an entity to
the SVC system.
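
The following sketch shows the invocation; the output depends on which hosts have CHAP
secrets that are configured:

IBM_2145:ITSO_SVC2:superuser>lsiscsiauth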

9.4.3 Modifying a host
Use the chhost command to change the name of a host. To verify the change, run the lshost
command. Example 9-34 shows both of these commands.

Example 9-34 chhost command


IBM_2145:ITSO_SVC2:superuser>chhost -name Angola Guinea

IBM_2145:ITSO_SVC2:superuser>lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4

This command renamed the host from Guinea to Angola.

Host name: The chhost command specifies the new name first. You can use letters A - Z
and a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name can be 1 - 63
characters. However, it cannot start with a number, dash, or the word “host” because this
prefix is reserved for SVC assignment only.

Hosts that require the -type parameter: If you use Hewlett-Packard UNIX (HP-UX), you
use the -type option. For more information about the hosts that require the -type
parameter, see IBM System Storage Open Software Family SAN Volume Controller: Host
Attachment Guide, SC26-7563.
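
As a sketch only (check the chhost command help and the host attachment documentation for
the -type values that are valid at your code level), an existing HP-UX host that is named
Siam might be modified as follows:

IBM_2145:ITSO_SVC2:superuser>chhost -type hpux Siam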

9.4.4 Deleting a host


Use the rmhost command to delete a host from the SVC configuration. If your host is still
mapped to volumes and you use the -force flag, the host and all the mappings with it are
deleted. The volumes are not deleted; only the mappings to them are deleted.

The command that is shown in Example 9-35 deletes the host that is called Angola from the
SVC configuration.

Example 9-35 rmhost Angola


IBM_2145:ITSO_SVC2:superuser>rmhost Angola

Deleting a host: If any volumes are assigned to the host, you must use the -force flag, for
example, rmhost -force Angola.

9.4.5 Adding ports to a defined host


If you add an HBA or a network interface controller (NIC) to a server that is defined within the
SVC, you can use the addhostport command to add the new port definitions to your host
configuration.

If your host is connected through SAN with FC and if the WWPN is zoned to the SVC system,
issue the lshbaportcandidate command to compare with the information that you have from
the server administrator, as shown in Example 9-36.

Example 9-36 lshbaportcandidate command


IBM_2145:ITSO_SVC2:superuser>lshbaportcandidate
id
210000E08B054CAA

Use host or SAN switch utilities to verify whether the WWPN matches your information. If the
WWPN matches your information, use the addhostport command to add the port to the host,
as shown in Example 9-37.

Example 9-37 addhostport command


IBM_2145:ITSO_SVC2:superuser>addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.

Adding multiple ports: You can add multiple ports at one time by using the separator or
colon (:) between WWPNs, as shown in the following example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the lshbaportcandidate command does not
display your WWPN. In this case, you can manually enter the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 9-38.

Example 9-38 addhostport command


IBM_2145:ITSO_SVC2:superuser>addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN that is named 210000E08B054CAA to the host
called Palau.

WWPNs: WWPNs are not case sensitive within the CLI.

If you run the lshost command again, you can see your host with an updated port count of 2,
as shown in Example 9-39.

Example 9-39 lshost command: Port count


IBM_2145:ITSO_SVC2:superuser>lshost
id name port_count iogrp_count
0 Palau 2 4
1 ITSO_W2008 1 4
2 Thor 3 1
3 Frigg 1 1
4 Baldur 1 1

If your host uses iSCSI as a connection method, you must have the new iSCSI IQN ID before
you add the port. Unlike FC-attached hosts, you cannot check for available candidates with
iSCSI.

After you acquire the other iSCSI IQN, use the addhostport command, as shown in
Example 9-40.

Example 9-40 Adding an iSCSI port to an already configured host


IBM_2145:ITSO_SVC2:superuser>addhostport -iscsiname
iqn.1991-05.com.microsoft:baldur 4

9.4.6 Deleting ports


If you make a mistake when you are adding a port or if you remove an HBA from a server that
is defined within the SVC, you can use the rmhostport command to remove WWPN
definitions from an existing host.

Before you remove the WWPN, ensure that it is the correct WWPN by issuing the lshost
command, as shown in Example 9-41.

Example 9-41 lshost command


IBM_2145:ITSO_SVC2:superuser>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host
port, as shown in Example 9-42.

Example 9-42 rmhostport command


To remove a WWPN:
IBM_2145:ITSO_SVC2:superuser>rmhostport -hbawwpn 210000E08B89C1CD Palau

To remove an iSCSI IQN:
IBM_2145:ITSO_SVC2:superuser>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI
IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using the
separator or colon (:) between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

9.5 Working with the Ethernet port for iSCSI
In this section, we describe the commands that are used for setting, changing, and displaying
the SVC Ethernet port for iSCSI configuration.

Example 9-43 shows the lsportip command that lists the iSCSI IP addresses that are
assigned for each port on each node in the system.

Example 9-43 The lsportip command


IBM_2145:ITSO_SVC2:superuser>lsportip
id node_id node_name IP_address mask
gateway IP_address_6 prefix_6 gateway_6 MAC
duplex state speed failover
1 1 node1
00:1a:64:95:2f:cc Full unconfigured 1Gb/s no
1 1 node1
00:1a:64:95:2f:cc Full unconfigured 1Gb/s yes
2 1 node1 10.44.36.64 255.255.255.0
10.44.36.254 00:1a:64:95:2f:ce Full
online 1Gb/s no
2 1 node1
00:1a:64:95:2f:ce Full online 1Gb/s yes
1 2 node2
00:1a:64:95:3f:4c Full unconfigured 1Gb/s no
1 2 node2
00:1a:64:95:3f:4c Full unconfigured 1Gb/s yes
2 2 node2 10.44.36.65 255.255.255.0
10.44.36.254 00:1a:64:95:3f:4e Full
online 1Gb/s no
2 2 node2
00:1a:64:95:3f:4e Full online 1Gb/s yes
1 3 node3
00:21:5e:41:53:18 Full unconfigured 1Gb/s no
1 3 node3
00:21:5e:41:53:18 Full unconfigured 1Gb/s yes
2 3 node3 10.44.36.60 255.255.255.0
10.44.36.254 00:21:5e:41:53:1a
Full online 1Gb/s no
2 3 node3
00:21:5e:41:53:1a Full online 1Gb/s yes
1 4 node4
00:21:5e:41:56:8c Full unconfigured 1Gb/s no
1 4 node4
00:21:5e:41:56:8c Full unconfigured 1Gb/s yes
2 4 node4 10.44.36.63 255.255.255.0
10.44.36.254 00:21:5e:41:56:8e Full
online 1Gb/s no
2 4 node4
00:21:5e:41:56:8e Full online 1Gb/s yes

Example 9-44 shows how the cfgportip command assigns an IP address to each node
Ethernet port for iSCSI I/O.

Example 9-44 The cfgportip command


IBM_2145:ITSO_SVC2:superuser>cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2

9.6 Working with volumes


In this section, we describe the various configuration and administrative tasks that can be
performed on the volume within the SVC environment.

9.6.1 Creating a volume


The mkvdisk command creates sequential, striped, or image mode volume objects. When
they are mapped to a host object, these objects are seen as disk drives with which the host
can perform I/O operations.

When a volume is created, you must enter several parameters at the CLI. Mandatory and
optional parameters are available.

For more information, see Command-Line Interface User’s Guide, SC27-2287.

Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.

When you are ready to create a volume, you must know the following information before you
start to create the volume:
򐂰 In which storage pool the volume has its extents
򐂰 From which I/O Group the volume is accessed
򐂰 Which SVC node is the preferred node for the volume
򐂰 Size of the volume
򐂰 Name of the volume
򐂰 Type of the volume
򐂰 Whether this volume is managed by Easy Tier to optimize its performance

When you are ready to create your striped volume, use the mkvdisk command. In
Example 9-45, this command creates a 10 GB striped volume with volume ID 20 within the
storage pool STGPool_DS3500-2 and assigns it to the io_grp0 I/O Group. Its preferred node is
node 1.

Example 9-45 mkvdisk command


IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0
-node 1 -size 10 -unit gb -name Tiger
Virtual Disk, id [20], successfully created

To verify the results, use the lsvdisk command, as shown in Example 9-46.

Example 9-46 lsvdisk command


IBM_2145:ITSO_SVC2:superuser>lsvdisk 20
id 20
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000016
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

The required tasks to create a volume are complete.

9.6.2 Volume information
Use the lsvdisk command to display summary information about all volumes that are
defined within the SVC environment. To display more detailed information about a specific
volume, run the command again and append the volume name parameter or the volume ID.

Example 9-47 shows both of these commands.

Example 9-47 lsvdisk command


IBM_2145:ITSO_SVC2:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type
FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change
0 Volume_A 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
0 GMREL1 6005076801AF813F1000000000000031 0 1 empty 0
0 no
1 Volume_B 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
1 GMREL2 6005076801AF813F1000000000000032 0 1 empty 0
0 no
2 Volume_C 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
2 GMREL3 6005076801AF813F1000000000000033 0 1 empty 0
0 no
IBM_2145:ITSO_SVC2:superuser>lsvdisk Volume_A
id 0
name Volume_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 0
RC_name GMREL1
vdisk_UID 6005076801AF813F1000000000000031
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
type striped

mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

9.6.3 Creating a thin-provisioned volume


Example 9-48 shows how to create a thin-provisioned volume. In addition to the normal
parameters, you must use the following parameters:
򐂰 -rsize: This parameter makes the volume a thin-provisioned volume; otherwise, the
volume is fully allocated.
򐂰 -autoexpand: This parameter specifies that thin-provisioned volume copies automatically
expand their real capacities by allocating new extents from their storage pool.
򐂰 -grainsize: This parameter sets the grain size (in KB) for a thin-provisioned volume.

Example 9-48 Usage of the command mkvdisk


IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp 0 -vtype striped
-size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [21], successfully created

This command creates a space-efficient 10 GB volume. The volume belongs to the storage
pool that is named STGPool_DS3500-2 and is owned by the io_grp0 I/O Group. The real
capacity automatically expands until the volume size of 10 GB is reached. The grain size is
set to 32 KB.

Disk size: When the -rsize parameter is used, you have the following options: disk_size,
disk_size_percentage, and auto.

Specify the disk_size_percentage value by using an integer, or an integer that is
immediately followed by the percent (%) symbol.

Specify the units for a disk_size integer by using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.

The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.

An entry of 1 GB uses 1,024 MB.

9.6.4 Creating a volume in image mode
This virtualization type allows an image mode volume to be created when an MDisk has data
on it, perhaps from a pre-virtualized subsystem. When an image mode volume is created, it
directly corresponds to the previously unmanaged MDisk from which it was created.
Therefore, except for a thin-provisioned image mode volume, the volume’s logical block
address (LBA) x equals MDisk LBA x.

You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.

When the first MDisk extent is migrated, the volume is no longer an image mode volume. You
can add an image mode volume to an already populated storage pool with other types of
volumes, such as striped or sequential volumes.

Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). In
addition, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MiB.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks
and the remaining space on the larger MDisk is inaccessible.

If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.

Use the mkvdisk command to create an image mode volume, as shown in Example 9-49.

Example 9-49 mkvdisk (image mode) command


IBM_2145:ITSO_SVC2:superuser>mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -mdisk mdisk10
-vtype image -name Image_Volume_A
Virtual Disk, id [22], successfully created

This command creates an image mode volume that is called Image_Volume_A that uses the
mdisk10 MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and the volume is
owned by the io_grp0 I/O Group.

If we run the lsvdisk command again, the volume that is named Image_Volume_A has a
type of image, as shown in Example 9-50.

Example 9-50 lsvdisk command


IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change
22 Image_Volume_A 0 io_grp0 online 0 STGPool_DS3500-1 10.00GB
image 6005076801AF813F1000000000000018 0 1
empty 0 no

9.6.5 Adding a mirrored volume copy
You can create a mirrored copy of a volume, which keeps a volume accessible even when the
MDisk on which it depends becomes unavailable. You can create a copy of a volume on
separate storage pools or by creating an image mode copy of the volume. Copies increase
the availability of data; however, they are not separate objects. You can create or change
mirrored copies from the volume only.

In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools.

For example, if you have a non-mirrored volume in one storage pool and want to migrate that
volume to another storage pool, you can add a copy of the volume and specify the second
storage pool. After the copies are synchronized, you can delete the copy on the first storage
pool. The volume is copied to the second storage pool while remaining online during the copy.
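The following minimal sketch illustrates this migration approach by using the addvdiskcopy
command (described next) and the rmvdiskcopy command. The volume name Volume_X is a
hypothetical example, the pool name is taken from the examples in this chapter, and copy 0
must be removed only after lsvdisksyncprogress reports 100% for the new copy:

IBM_2145:ITSO_SVC2:superuser>addvdiskcopy -mdiskgrp STGPool_DS5000-1 Volume_X
IBM_2145:ITSO_SVC2:superuser>lsvdisksyncprogress Volume_X
IBM_2145:ITSO_SVC2:superuser>rmvdiskcopy -copy 0 Volume_X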

To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.

In the following scenario, we show how to mirror a volume from one storage pool to
another storage pool.

As you can see in Example 9-51, the volume has a copy with copy_id 0.

Example 9-51 lsvdisk command


IBM_2145:ITSO_SVC2:superuser>lsvdisk Volume_no_mirror
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online

sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

In Example 9-52, we add the volume copy mirror by using the addvdiskcopy command.

Example 9-52 addvdiskcopy command


IBM_2145:ITSO_SVC2:superuser>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped -unit
gb Volume_no_mirror
Vdisk [23] copy [1] successfully created

During the synchronization process, you can see the status by using the
lsvdisksyncprogress command. As shown in Example 9-53, the first time that the status is
checked, the synchronization progress is at 48%, and the estimated completion time is
displayed in YYMMDDHHMMSS format (in this example, 26 September 2011 at 20:39:18). The
second time that the command is run, the progress is at 100%, and the synchronization is complete.

Example 9-53 Synchronization


IBM_2145:ITSO_SVC2:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 48 110926203918
IBM_2145:ITSO_SVC2:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 100

As you can see in Example 9-54, the new mirrored volume copy (copy_id 1) was added and
can be seen by using the lsvdisk command.

Example 9-54 lsvdisk command


IBM_2145:ITSO_SVC2:superuser>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many

mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB

free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

When you add a volume copy mirror, you can define it with parameters that differ from those
of the existing volume copy. For example, you can add a thin-provisioned copy to a fully
allocated volume (and vice versa), which is one way to migrate a fully allocated volume to a
thin-provisioned volume.
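As a sketch of this conversion, a thin-provisioned copy is added to a hypothetical fully
allocated volume named Volume_fat; the pool name and the -rsize value are assumptions.
Check lsvdisksyncprogress and remove copy 0 only after the new copy reports 100%:

IBM_2145:ITSO_SVC2:superuser>addvdiskcopy -mdiskgrp STGPool_DS3500-2 -rsize 20% -autoexpand Volume_fat
IBM_2145:ITSO_SVC2:superuser>rmvdiskcopy -copy 0 Volume_fat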

Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.

Now, we can change the name of the volume that was mirrored from Volume_no_mirror to
Volume_mirrored, as shown in Example 9-55.

Example 9-55 Volume name changes


IBM_2145:ITSO_SVC2:superuser>chvdisk -name Volume_mirrored Volume_no_mirror

9.6.6 Splitting a mirrored volume


The splitvdiskcopy command creates a volume in the specified I/O Group from a copy of the
specified volume. If the copy that you are splitting is not synchronized, you must use the
-force parameter. The command fails if you are attempting to remove the only synchronized
copy. To avoid this failure, wait for the copy to synchronize or split the unsynchronized copy
from the volume by using the -force parameter. You can run the command when either
volume copy is offline.

Example 9-56 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a volume that is named Volume_new from the volume that is named
Volume_mirrored.

Example 9-56 Split volume


IBM_2145:ITSO_SVC2:superuser>splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new
Volume_mirrored
Virtual Disk, id [24], successfully created

As you can see in Example 9-57 on page 529, the new volume that is named Volume_new was
created as an independent volume.

Example 9-57 lsvdisk command
IBM_2145:ITSO_SVC2:superuser>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

After the command that is shown in Example 9-56 on page 528 is issued, Volume_mirrored no
longer has a mirrored copy because that copy was split off to create the new volume.

9.6.7 Modifying a volume


Running the chvdisk command modifies a single property of a volume. Only one property
can be modified at a time. Therefore, changing the name and modifying the I/O Group require
two invocations of the command.

You can specify a new name or label. The new name can be used to reference the volume.
The I/O Group with which this volume is associated can be changed. Changing the I/O Group
with which this volume is associated requires a flush of the cache within the nodes in the
current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host
level before you perform this operation.

Tips: If the volume has a mapping to any hosts, it is impossible to move the volume to an
I/O Group that does not include any of those hosts.

This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O Group.

If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.

If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.

9.6.8 I/O governing


You can set a limit on the number of I/O operations that are accepted for a volume. The limit is
set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set
when a volume is created.

Base the choice between I/O and MB as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large numbers of I/O operations, but
they transfer only a relatively small amount of data. In this case, setting an I/O governing
throttle that is based on MB per second does not achieve much; it is better to use an
I/Os-per-second throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle that is based on I/Os per second does not achieve much, so it is better to
use an MB per second throttle.
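For instance, to cap a database volume by operations per second rather than by bandwidth, an
I/Os-per-second throttle can be set. The following sketch assumes a hypothetical volume named
volume_DB01; when the -unitmb parameter is omitted, the -rate value is interpreted as I/Os per
second:

IBM_2145:ITSO_SVC2:superuser>chvdisk -rate 2000 volume_DB01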

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can
be achieved. It means that no throttle is set.

An example of the chvdisk command is shown in Example 9-58 on page 531.

Example 9-58 chvdisk command
IBM_2145:ITSO_SVC2:superuser>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC2:superuser>chvdisk -warning 85% volume_7

New name first: The chvdisk command specifies the new name first. The name can
consist of letters A - Z and a - z, numbers 0 - 9, the dash (-), and the underscore (_). It can
be 1 - 63 characters. However, it cannot start with a number, dash, or the word “vdisk”
because this prefix is reserved for SVC assignment only.

The first command changes the volume throttling of volume_7 to 20 MBps. The second
command changes the thin-provisioned volume warning to 85%. To verify the changes, issue
the lsvdisk command, as shown in Example 9-59.

Example 9-59 lsvdisk command: Verifying throttling


IBM_2145:ITSO_SVC2:superuser>lsvdisk volume_7
id 1
name volume_7
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001F
virtual_disk_throttling (MB) 20
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496

autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB

9.6.9 Deleting a volume


When the rmvdisk command is run on an existing fully managed mode volume, any data that
remained on it is lost. The extents that made up this volume are returned to the pool of free
extents that are available in the storage pool.

If any remote copy, FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.

If the volume is the subject of a “migrate to image mode” process, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.

If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit is consistent with the data that a host might previously read
from the image mode volume. That is, all fast write data was flushed to the underlying LUN. If
the -force flag is used, consistency is not guaranteed.

If any non-destaged data exists in the fast write cache for this volume, the deletion of the
volume fails unless the -force flag is specified, in which case any non-destaged data in the
fast write cache is discarded.

Use the rmvdisk command to delete a volume from your SVC configuration, as shown in
Example 9-60.

Example 9-60 rmvdisk command


IBM_2145:ITSO_SVC2:superuser>rmvdisk volume_A

This command deletes the volume_A volume from the SVC configuration. If the volume is
assigned to a host, you must use the -force flag to delete the volume, as shown in
Example 9-61.

Example 9-61 rmvdisk -force command


IBM_2145:ITSO_SVC2:superuser>rmvdisk -force volume_A

9.6.10 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although this
expansion can be easily performed by using the SVC, you must ensure that your operating
systems support expansion before this function is used.

Assuming that your operating systems support expansion, you can use the expandvdisksize
command to increase the capacity of a volume, as shown in Example 9-62.

Example 9-62 expandvdisksize command


IBM_2145:ITSO_SVC2:superuser>expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it a
total size of 40 GB.

To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 9-63. This command changes the real size of the volume_B volume to a real capacity
of 55 GB. The capacity of the volume is unchanged.

Example 9-63 lsvdisk


IBM_2145:ITSO_SVC2:superuser>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes
IBM_2145:ITSO_SVC2:superuser>expandvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC2:superuser>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes

Important: If a volume is expanded, its type becomes striped even if it was previously
sequential or in image mode. If enough extents are not available to expand your volume to
the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

9.6.11 Assigning a volume to a host
Use the mkvdiskhostmap command to map a volume to a host. When run, this command
creates a mapping between the volume and the specified host, which presents this volume to
the host as though the disk was directly attached to the host. It is only after this command is
run that the host can perform I/O to the volume. Optionally, a SCSI LUN ID can be assigned to
the mapping.

When the HBA on the host scans for devices that are attached to it, the HBA discovers all of
the volumes that are mapped to its FC ports. When the devices are found, each one is
allocated an identifier (SCSI LUN ID).

For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID, as required. If you do not
specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID,
based on any mappings that exist with that host.

By using the volume and host definition that we created in the previous sections, we assign
volumes to hosts that are ready for their use. We use the mkvdiskhostmap command, as
shown in Example 9-64.

Example 9-64 mkvdiskhostmap command


IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Almaden volume_C
Virtual Disk to Host map, id [1], successfully created

The lshostvdiskmap command confirms that volume_B and volume_C are now assigned to the host
Almaden, as shown in Example 9-65.

Example 9-65 lshostvdiskmap -delim command


IBM_2145:ITSO_SVC2:superuser>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021

Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.

Certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, as shown in the
following examples:
򐂰 Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
򐂰 Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
򐂰 Volume 3 is mapped to Host 1 with SCSI LUN ID 4.

When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.
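For example, to keep the SCSI LUN IDs for the host Almaden contiguous (IDs 0 and 1 are
already in use in the earlier examples), a specific LUN ID can be supplied when the next
volume is mapped. The following sketch uses a hypothetical volume named volume_X:

IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Almaden -scsi 2 volume_X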

It is not possible to map a volume to the same host more than one time at separate LUNs
(Example 9-66).

Example 9-66 mkvdiskhostmap command


IBM_2145:ITSO_SVC2:superuser>mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created

This command maps the volume that is called volume_A to the host that is called Siam.

All tasks that are required to assign a volume to an attached host are complete.

9.6.12 Showing volumes to host mapping


Use the lshostvdiskmap command to show the volumes that are assigned to a specific host,
as shown in Example 9-67.

Example 9-67 lshostvdiskmap command


IBM_2145:ITSO_SVC2:superuser>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this command, you can see that the host Siam has only one assigned volume that is
called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is
presented to the host. If no host is specified, all defined host-to-volume mappings are
returned.

Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.

9.6.13 Deleting a volume to host mapping


When you are deleting a volume mapping, you are not deleting the volume, only the
connection from the host to the volume. If you mapped a volume to a host by mistake or you
want to reassign the volume to another host, use the rmvdiskhostmap command to unmap a
volume from a host, as shown in Example 9-68.

Example 9-68 rmvdiskhostmap command


IBM_2145:ITSO_SVC2:superuser>rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume that is called volume_D from the host that is called Tiger.

9.6.14 Migrating a volume
You might want to migrate volumes from one set of MDisks to another set of MDisks to
decommission an old disk subsystem, to achieve better balanced performance across your
virtualized environment, or to migrate data into the SVC environment transparently by using
image mode. For more information about migration, see Chapter 6, “Data migration” on
page 241.

Important: After migration is started, it continues until completion unless it is stopped or
suspended by an error condition, or the volume that is being migrated is deleted.

As you can see from the parameters that are shown in Example 9-69, before you can migrate
your volume, you must know the name of the volume that you want to migrate and the name
of the storage pool to which you want to migrate it. To discover the names, run the lsvdisk
and lsmdiskgrp commands.

After you know these details, you can run the migratevdisk command, as shown in
Example 9-69.

Example 9-69 migratevdisk command


IBM_2145:ITSO_SVC2:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk
volume_C

This command moves volume_C to the storage pool named STGPool_DS5000-1.

Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.

By using the optional threads parameter, you can assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
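For example, to run the same migration at a lower priority, the number of threads can be
reduced; this sketch reuses the volume and pool names from Example 9-69:

IBM_2145:ITSO_SVC2:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 2 -vdisk volume_C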

You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 9-70.

Example 9-70 lsmigrate command


IBM_2145:ITSO_SVC2:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO_SVC2:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is shown as percent complete. If the lsmigrate command returns no
further output, the migration process finished.

9.6.15 Migrating a fully managed volume to an image mode volume


Migrating a fully managed volume to an image mode volume allows the SVC to be removed
from the data path, which might be useful where the SVC is used as a data mover appliance.
You can use the migratetoimage command.

To migrate a fully managed volume to an image mode volume, the following rules apply:
򐂰 The destination MDisk must be greater than or equal to the size of the volume.
򐂰 The MDisk that is specified as the target must be in an unmanaged state.
򐂰 Regardless of the mode in which the volume starts, it is reported as a managed mode
during the migration.
򐂰 Both of the MDisks that are involved are reported as being image mode volumes during
the migration.
򐂰 If the migration is interrupted by a system recovery or cache problem, the migration
resumes after the recovery completes.

Example 9-71 shows an example of the command.

Example 9-71 migratetoimage command


IBM_2145:ITSO_SVC2:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp
STGPool_IMAGE

In this example, the data from volume_A is migrated onto mdisk10, and the MDisk is placed
into the STGPool_IMAGE storage pool.

9.6.16 Shrinking a volume


The shrinkvdisksize command reduces the capacity that is allocated to the particular
volume by the amount that you specify. You cannot shrink the real size of a thin-provisioned
volume to less than its used size. All capacities (including changes) must be in multiples of
512 bytes. An entire extent is reserved even if it is only partially used. The default capacity
units are MBs.

You can use this command to shrink the physical capacity that is allocated to a particular
volume by the specified amount. You also can use this command to shrink the virtual capacity
of a thin-provisioned volume without altering the physical capacity that is assigned to the
volume. Use the following parameters:
򐂰 For a non-thin-provisioned volume, use the -size parameter.
򐂰 For a thin-provisioned volume’s real capacity, use the -rsize parameter.
򐂰 For the thin-provisioned volume’s virtual capacity, use the -size parameter.

When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
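As a sketch of the difference between the -rsize and -size parameters, the following commands
assume a thin-provisioned volume named volume_B, as used earlier in this chapter; the first
command reduces only the real (allocated) capacity, and the second reduces the virtual
capacity that is presented to the host:

IBM_2145:ITSO_SVC2:superuser>shrinkvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC2:superuser>shrinkvdisksize -size 10 -unit gb volume_B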

The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed; therefore, you cannot assume that it is unused space that
is removed.

Image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully
managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of
the volume must be synchronized.

Important: Consider the following guidelines when you are shrinking a disk:
򐂰 If the volume contains data, do not shrink the disk.
򐂰 Certain operating systems or file systems use the outer edge of the disk for
performance reasons.
򐂰 This command can shrink a FlashCopy target volume to the same capacity as the source.
򐂰 Before you shrink a volume, validate that the volume is not mapped to any host objects.
If the volume is mapped, data is displayed. You can determine the exact capacity of the
source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the volume by the required amount by issuing the shrinkvdisksize -size
disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.

Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a volume, as shown in Example 9-72.

Example 9-72 shrinkvdisksize command


IBM_2145:ITSO_SVC2:superuser>shrinkvdisksize -size 44 -unit gb volume_D

This command shrinks a volume that is called volume_D from a total size of 80 GB by 44 GB,
to a new total size of 36 GB.

9.6.17 Showing a volume on an MDisk


Use the lsmdiskmember command to display information about the volume that is using
space on a specific MDisk, as shown in Example 9-73.

Example 9-73 lsmdiskmember command


IBM_2145:ITSO_SVC2:superuser>lsmdiskmember mdisk8
id copy_id
24 0
27 0

This command displays a list of all of the volume IDs that correspond to the volume copies
that use mdisk8.

To correlate the IDs that are displayed in this output to volume names, we can run the
lsvdisk command. For more information, see 9.6, “Working with volumes” on page 520.

9.6.18 Showing which volumes are using a storage pool


Use the lsvdisk -filtervalue command to see which volumes are part of a specific storage
pool, as shown in Example 9-74 on page 539. This command shows all of the volumes that
are part of the storage pool that is named STGPool_DS3500-2.

Example 9-74 lsvdisk -filtervalue: VDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
00000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1
000000000000011,0,1,empty,0,0,no

9.6.19 Showing which MDisks are used by a specific volume


Use the lsvdiskmember command to show from which MDisks a specific volume’s extents
came, as shown in Example 9-75.

Example 9-75 lsvdiskmember command


IBM_2145:ITSO_SVC2:superuser>lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the lsmdisk command, as
described in 9.2, “New commands and functions” on page 498 (by using the ID that is
displayed in Example 9-75 rather than the name).

9.6.20 Showing from which storage pool a volume has its extents
Use the lsvdisk command to show to which storage pool a specific volume belongs, as
shown in Example 9-76.

Example 9-76 lsvdisk command: Storage pool name


IBM_2145:ITSO_SVC2:superuser>lsvdisk Volume_D
id 25
name Volume_D
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id

mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB

To learn more about these storage pools, you can run the lsmdiskgrp command, as
described in 9.3.10, “Working with a storage pool” on page 508.

9.6.21 Showing the host to which the volume is mapped


To show the hosts to which a specific volume was assigned, run the lsvdiskhostmap
command, as shown in Example 9-77.

Example 9-77 lsvdiskhostmap command


IBM_2145:ITSO_SVC2:superuser>lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,vdisk_UID
26,volume_B,0,2,Almaden,6005076801AF813F1000000000000020

This command shows the host or hosts to which the volume_B volume was mapped. Duplicate
entries are normal because multiple paths exist between the clustered system and the host. To
be sure that the operating system on the host sees the disk only one time, you must install
and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.

9.6.22 Showing the volume to which the host is mapped


To show the volume to which a specific host was assigned, run the lshostvdiskmap
command, as shown in Example 9-78.

Example 9-78 lshostvdiskmap command example


IBM_2145:ITSO_SVC2:superuser>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004

This command shows which volumes are mapped to the host called Almaden.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.

9.6.23 Tracing a volume from a host back to its physical disk


In many cases, you must verify exactly which physical disk is presented to the host, for
example, from which storage pool a specific volume comes. However, from the host side, it is
not possible for the server administrator who is using the GUI to see on which physical disks
the volumes are running.

Instead, you must enter the command that is shown in Example 9-79 from your multipath
command prompt.

Complete the following steps:


1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 9-79.

Example 9-79 datapath query device command


DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 20 0
1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2343 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 2335 0

1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 2331 0
1 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

State: In Example 9-79, the state of each path is OPEN. Sometimes, the state is
CLOSED. This state does not necessarily indicate a problem because it might be a
result of the path’s processing stage.

2. Run the lshostvdiskmap command to return a list of all assigned volumes, as shown in
Example 9-80.

Example 9-80 lshostvdiskmap command


IBM_2145:ITSO_SVC2:superuser>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Almaden.
3. Run the lsvdiskmember vdiskname command for the MDisk or a list of the MDisks that
make up the specified volume, as shown in Example 9-81.

Example 9-81 lsvdiskmember command


IBM_2145:ITSO_SVC2:superuser>lsvdiskmember volume_E
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the lsmdisk mdiskID command to discover their controller and
LUN information, as shown in Example 9-82. The output displays the controller name and
the controller LUN ID to help you to track back to a LUN within the disk subsystem (if you
gave your controller a unique name, such as a serial number). See Example 9-82.

Example 9-82 lsmdisk command


IBM_2145:ITSO_SVC2:superuser>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1

capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd

9.7 Scripting under the CLI for SAN Volume Controller task automation

Command prefix changes: The svctask and svcinfo command prefixes are no longer
necessary when a command is run. If you have existing scripts that use those prefixes,
they continue to function. You do not need to change the scripts.

Scripting works well for the automation of regular operational jobs. You can use any
available shell to develop scripts. Scripting enhances the productivity of SVC
administrators and the integration of their storage virtualization environment. You can create
your own customized scripts to automate many tasks, schedule them for completion at various
times, and run them through the CLI.

We suggest that you keep the scripting as simple as possible in large SAN environments
where scripting commands are used. It is harder to manage fallback, documentation, and the
verification of a successful script before execution in a large SAN environment.

In this section, we present an overview of how to automate various tasks by creating scripts
by using the SVC CLI.

9.7.1 Scripting structure


When you create scripts to automate the tasks on the SVC, use the structure that is shown in
Figure 9-2 on page 544.

(Figure 9-2 shows the following flow: create an SSH connection to the SVC, run the commands,
and perform logging, with either scheduled or manual activation.)

Figure 9-2 Scripting structure for SVC task automation

Creating a Secure Shell connection to the SAN Volume Controller

Secure Shell Key: The use of a Secure Shell (SSH) key is optional. (You can use a user
ID and password to access the system.) However, we suggest the use of an SSH key for
security reasons. We provide a sample of its use in this section.

When you create a connection to the SVC from a script, you must have access to a private key
that corresponds to a public key that was previously uploaded to the SVC.

The key is used to establish the SSH connection that is needed to use the CLI on the SVC. If
the SSH key pair is generated without a passphrase, you can connect without the need for
special scripting to pass in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC.
On Windows systems, you can use a utility that is called plink.exe (which is provided with the
PuTTY tool) to create an SSH connection with the SVC. In the following examples, we use
plink to create the SSH connection to the SVC.

Running the commands


For more information about the correct syntax and an explanation of each command when
you are using the CLI, see IBM System Storage SAN Volume Controller Command-Line
Interface User’s Guide, GC27-2287, which is available at this website:
http://www.ibm.com/support/entry/portal/support?brandind=System%20Storage~Storage%
20software~Storage%20virtualization

When you use the CLI, not all commands provide a response to determine the status of the
started command. Therefore, always create checks that can be logged for monitoring and
troubleshooting purposes.

Connecting to the SAN Volume Controller by using a predefined SSH
connection
The easiest way to create an SSH connection to the SVC is when plink can call a predefined
PuTTY session.

Define a session, including the following information:


򐂰 The auto-login user name. Set the auto-login username to your SVC superuser user name
(for example, superuser). Set this parameter by clicking Connection → Data, as shown in
Figure 9-3.

Figure 9-3 Auto-login configuration

򐂰 The private key for authentication (for example, icat.ppk). This key is the private key that
you created. Set this parameter by clicking Connection → SSH → Auth, as shown in
Figure 9-4 on page 546.

Figure 9-4 An ssh private key configuration

򐂰 The IP address of the SVC clustered system. Set this parameter by clicking Session, as
shown in Figure 9-5.

Figure 9-5 IP address

Enter the following information:


– A session name. Our example uses ITSO_SVC2.
– Our PuTTY version is 0.63.
– To use this predefined PuTTY session, use the following syntax:
plink ITSO_SVC2

– If a predefined PuTTY session is not used, use the following syntax:
plink superuser@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"
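As an illustration of the structure that is shown in Figure 9-2, the following minimal sketch
is a UNIX shell script that connects with ssh, runs one query, and logs the result. The system
IP address, user ID, key path, and log file are example values that must be adapted to your
environment:

#!/bin/sh
# Sketch only: connect to the SVC, run a command, and log the outcome.
SVC=superuser@10.18.228.140        # clustered system IP (example value)
KEY=/home/admin/.ssh/svc_key       # private key whose public key was uploaded to the SVC
LOG=/var/log/svc_lsvdisk.log

# Run lsvdisk with a delimiter so that the output is easy to parse later.
ssh -i "$KEY" "$SVC" "lsvdisk -delim :" >> "$LOG" 2>&1

# Not all commands return output, so always log the return code as a check.
if [ $? -ne 0 ]; then
    echo "$(date): lsvdisk failed" >> "$LOG"
fi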

IBM provides a suite of scripting tools that are based on Perl. You can download these
scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools

9.8 SAN Volume Controller advanced operations by using the CLI
In the following sections, we describe the commands that we think best represent advanced
operational commands.

Important command prefix changes: The svctask and svcinfo command prefixes are
no longer necessary when you are running a command. If you have existing scripts that
use those prefixes, they continue to function. You do not need to change the scripts.

9.8.1 Command syntax


The following major command sets are available:
򐂰 By using the svcinfo command, you can query the various components within the SVC
environment.
򐂰 By using the svctask command, you can change the various components within the SVC.

When the command syntax is shown, you see several parameters in square brackets, for
example, [parameter]. The square brackets indicate that the parameter is optional in most if
not all instances. Any parameter that is not in square brackets is required information. You
can view the syntax of a command by entering one of the following commands:
򐂰 svcinfo -? shows a complete list of information commands.
򐂰 svctask -? shows a complete list of task commands.
򐂰 svcinfo commandname -? shows the syntax of information commands.
򐂰 svctask commandname -? shows the syntax of task commands.
򐂰 svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the information commands.

Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.

If you review the syntax of a command by entering svcinfo commandname -?, you often see
-filter listed as a parameter. The correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were issued recently. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.

9.8.2 Organizing on window content
There are instances in which the output of a command can be long and difficult to read in the
window. If you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by a command, you can specify a number of filters,
depending on the command that you are running. To see which filters are available, enter the
command followed by the -filtervalue? flag, as shown in Example 9-83.

Example 9-83 lsvdisk -filtervalue? command


IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue?

Filters for this view are:


name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id

vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_flash

When you know the filters, you can be more selective in generating output. Consider the
following points:
򐂰 Multiple filters can be combined to create specific searches.
򐂰 You can use an asterisk (*) as a wildcard when names are used.
򐂰 When capacity is used, the units must also be specified by using -u b | kb | mb | gb | tb |
pb.
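For example, the name filter can be combined with the asterisk wildcard to list only the
volumes whose names start with W2K3_SRV2 (volume names taken from the examples in this
chapter); the filter value is quoted here because it contains a wildcard:

IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue "name=W2K3_SRV2*" -delim ,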

For example, if we run the lsvdisk command with no filters but with the -delim parameter, we
see the output that is shown in Example 9-84 on page 549.

Example 9-84 lsvdisk command: No filters
IBM_2145:ITSO_SVC2:superuser>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F100000000000
0014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F10000000
0000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no

Tip: The -delim parameter separates the data fields with the character that you specify (a
comma in these examples) instead of aligning the output in columns that can wrap over
multiple lines. This parameter is often used when reports are generated during script execution.

If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output,
as shown in Example 9-85.

Example 9-85 lsvdisk command: With a filter


IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no

9.9 Managing the clustered system by using the CLI


In the following sections, we describe how to perform system administration.

9.9.1 Viewing clustered system properties

Important changes: The following changes were made since SVC 6.3:
򐂰 The svcinfo lscluster command was changed to lssystem.
򐂰 The svctask chcluster command was changed to chsystem, and several optional
parameters were moved to new commands. For example, to change the IP address of
the system, you can now use the chsystemip command. All of the old commands are
maintained for compatibility.

Use the lssystem command to display summary information about the clustered system, as
shown in Example 9-86.

Example 9-86 lssystem command


IBM_2145:ITSO_SVC2:superuser>lssystem
id 000002007F600A10
name ITSO_SVC2
location local
partnership
total_mdisk_capacity 825.0GB
space_in_mdisk_grps 571.0GB
space_allocated_to_vdisks 75.05GB
total_free_space 750.0GB
total_vdiskcopy_capacity 85.00GB
total_used_capacity 75.00GB
total_overallocation 10
total_vdisk_capacity 75.00GB
total_allocated_extent_capacity 81.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.4.0.0 (build 103.11.1410200000)
console_IP 10.18.228.140:443
id_alias 000002007F600A10
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply validmailadress@ibm.com
email_contact Support team
email_contact_primary 123456789
email_contact_alternate 123456789
email_contact_location IBM
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name

auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 50
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 571.00GB
tier_free_capacity 493.00GB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization IBM
email_machine_address Street
email_machine_city City
email_machine_state CA
email_machine_zip 99999
email_machine_country CA
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 11111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method none
vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller

Use the lssystemstats command to display the most recent values of all node statistics
across all nodes in a clustered system, as shown in Example 9-87.

Example 9-87 lssystemstats command


IBM_2145:ITSO_SVC2:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc 1 1 110927162859
fc_mb 0 0 110927162859
fc_io 7091 7314 110927162524
sas_mb 0 0 110927162859
sas_io 0 0 110927162859
iscsi_mb 0 0 110927162859
iscsi_io 0 0 110927162859
write_cache_pc 0 0 110927162859
total_cache_pc 0 0 110927162859
vdisk_mb 0 0 110927162859
vdisk_io 0 0 110927162859
vdisk_ms 0 0 110927162859
mdisk_mb 0 0 110927162859
mdisk_io 0 0 110927162859
mdisk_ms 0 0 110927162859
drive_mb 0 0 110927162859

drive_io 0 0 110927162859
drive_ms 0 0 110927162859
vdisk_r_mb 0 0 110927162859
vdisk_r_io 0 0 110927162859
vdisk_r_ms 0 0 110927162859
vdisk_w_mb 0 0 110927162859
vdisk_w_io 0 0 110927162859
vdisk_w_ms 0 0 110927162859
mdisk_r_mb 0 0 110927162859
mdisk_r_io 0 0 110927162859
mdisk_r_ms 0 0 110927162859
mdisk_w_mb 0 0 110927162859
mdisk_w_io 0 0 110927162859
mdisk_w_ms 0 0 110927162859
drive_r_mb 0 0 110927162859
drive_r_io 0 0 110927162859
drive_r_ms 0 0 110927162859
drive_w_mb 0 0 110927162859
drive_w_io 0 0 110927162859
drive_w_ms 0 0 110927162859

9.9.2 Changing system settings


Use the chsystem command to change the settings of the system. This command modifies
specific features of a clustered system. You can change multiple features by issuing a single
command.

All command parameters are optional; however, you must specify at least one parameter.

Important: Changing the speed on a running system breaks I/O service to the attached
hosts. Before the fabric speed is changed, stop the I/O from the active hosts and force
these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). You might need to reboot specific hosts to
detect the new fabric speed.

Example 9-88 shows configuring the Network Time Protocol (NTP) IP address.

Example 9-88 chsystem command


IBM_2145:ITSO_SVC2:superuser>chsystem -ntpip 10.200.80.1
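
The chsystem command also accepts multiple settings in one invocation. The following line is a minimal sketch only; it assumes that the -gmlinktolerance parameter, which corresponds to the gm_link_tolerance field in the lssystem output in Example 9-86, is appropriate for your environment at a value of 300 seconds:

IBM_2145:ITSO_SVC2:superuser>chsystem -ntpip 10.200.80.1 -gmlinktolerance 300

You can rerun the lssystem command afterward to confirm that the values changed as expected.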

9.9.3 iSCSI configuration


SVC 5.1 introduced the IP-based Small Computer System Interface (iSCSI) as a supported
method of communication between the SVC and hosts. All back-end storage and intracluster
communication still use FC and the SAN; therefore, iSCSI cannot be used for that type of
communication.

For more information about how iSCSI works, see Chapter 2, “IBM SAN Volume Controller”
on page 9. In this section, we show how we configured our system for use with iSCSI.

We configured our nodes to use the primary and secondary Ethernet ports for iSCSI; the primary port also carries the clustered system IP. Configuring the nodes for iSCSI did not affect our clustered system IP. The clustered system IP can be changed as described in 9.9.2, “Changing system settings” on page 552.

Important: You can have more than a one-to-one relationship between IP addresses and physical connections. We can have a four-to-one (4:1) relationship, which consists of two IPv4 addresses plus two IPv6 addresses (four total) to one physical connection per port per node.

Tip: When you are reconfiguring IP ports, be aware that configured iSCSI connections
must reconnect if changes are made to the IP addresses of the nodes.

Two methods are available to perform iSCSI authentication by using the Challenge Handshake Authentication Protocol (CHAP): for the whole clustered system or per host connection. Example 9-89 shows configuring CHAP for the whole clustered system.

Example 9-89 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC2:superuser>chsystem -iscsiauthmethod chap -chapsecret passw0rd
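
To use the per host connection method instead, the CHAP secret is set on the host object rather than on the whole clustered system. The following line is a sketch only, and it assumes that the chhost command accepts a -chapsecret parameter for the host object (the host Kanaga is used later in this chapter):

IBM_2145:ITSO_SVC2:superuser>chhost -chapsecret passw0rd Kanaga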

In our scenario, our clustered system IP address is 9.64.210.64, which is not affected during
our configuration of the node’s IP addresses.

We start by listing our ports by using the lsportip command (not shown). We see that we
have two ports per node with which to work. Both ports can have two IP addresses that can
be used for iSCSI.
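
The lsportip output is not reproduced here, but the invocation is minimal. The -delim parameter, which is used with the other listing commands in this chapter, is optional:

IBM_2145:ITSO_SVC2:superuser>lsportip -delim ,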

We configure the secondary port in both nodes in our I/O Group, as shown in Example 9-90.

Example 9-90 Configuring the secondary Ethernet port on both SVC nodes
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC2:superuser>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

While both nodes are online, each node is available to iSCSI hosts on the IP address that we
configured. iSCSI failover between nodes is enabled automatically. Therefore, if a node goes
offline for any reason, its partner node in the I/O Group becomes available on the failed
node’s port IP address. This design ensures that hosts can continue to perform I/O. The
lsportip command displays the port IP addresses that are active on each node.

9.9.4 Modifying IP addresses


We can use both IP ports of the nodes. However, all IP information is required the first time
that you configure a second port because port 1 on the system must always have one stack
that is fully configured.

Now, two active system ports are on the configuration node. If the clustered system IP address is changed, the open command-line shell closes during the processing of the command, and you must reconnect to the new IP address if you were connected through that port.

If a node cannot rejoin the clustered system, you can start the node in service mode. In this mode, the node can be accessed as a stand-alone node by using the service IP address.

For more information about the service IP address, see 9.20, “Working with the Service
Assistant menu” on page 651.

List the IP addresses of the clustered system by issuing the lssystemip command, as shown
in Example 9-91.

Example 9-91 lssystemip command


IBM_2145:ITSO_SVC2:superuser>lssystemip
cluster_id cluster_name location port_id IP_address subnet_mask gateway
IP_address_6 prefix_6 gateway_6
000002007F600A10 ITSO_SVC2 local 1 10.18.228.140 255.255.255.0
10.18.228.1 0000:0000:0000:0000:0000:ffff:0a12:e48c 24
0000:0000:0000:0000:0000:ffff:0a12:e401
000002007F600A10 ITSO_SVC2 local 2

Modify the IP address by running the chsystemip command. You can specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 9-92.

Example 9-92 chsystemip -systemip


IBM_2145:ITSO_SVC2:superuser>chsystemip -systemip 10.20.133.5 -gw 10.20.135.1
-mask 255.255.255.0 -port 1

This command changes the IP address of the clustered system to 10.20.133.5.

Important: If you specify a new system IP address, the existing communication with the
system through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
still works.

List the IP service addresses of the clustered system by running the lsserviceip command.

9.9.5 Supported IP address formats


Table 9-1 lists the IP address formats.

Table 9-1 ip_address_list formats


IP type ip_address_list format

IPv4 (no port set, so SVC uses the default) 1.2.3.4

IPv4 with specific port 1.2.3.4:22

Full IPv6, default port 1234:1234:0001:0123:1234:1234:1234:1234

Full IPv6, default port, leading zeros suppressed 1234:1234:1:123:1234:1234:1234:1234

Full IPv6 with port [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23

Zero-compressed IPv6, default port 2002::4ff6

Zero-compressed IPv6 with port [2002::4ff6]:23

The required tasks to change the IP addresses of the clustered system are complete.

9.9.6 Setting the clustered system time zone and time
Use the -timezone parameter to specify the numeric ID of the time zone that you want to set.
Run the lstimezones command to list the time zones that are available on the system. This
command displays a list of valid time zone settings.

Tip: If you changed the time zone, you must clear the event log dump directory before you
can view the event log through the web application.

Setting the clustered system time zone


Complete the following steps to set the clustered system time zone and time:
1. Enter the showtimezone command to determine for which time zone your system is
configured, as shown in Example 9-93.

Example 9-93 showtimezone command


IBM_2145:ITSO_SVC2:superuser>showtimezone
id timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the lstimezones
command, as shown in Example 9-94. A truncated list is provided for this example. If this
setting is correct (for example, 522 UTC), go to Step 4. If the setting is incorrect, continue
with Step 3.

Example 9-94 lstimezones command


IBM_2145:ITSO_SVC2:superuser>lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Set the time zone by running the settimezone command, as shown in Example 9-95.

Example 9-95 settimezone command


IBM_2145:ITSO_SVC2:superuser>settimezone -timezone 520

4. Set the system time by running the setclustertime command, as shown in Example 9-96.

Example 9-96 setclustertime command


IBM_2145:ITSO_SVC2:superuser>setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY; for example, 061718402008 sets the system time to 18:40 on June 17, 2008.

The clustered system time zone and time are now set.

9.9.7 Starting statistics collection


Statistics are collected at the end of each sampling period (as specified by the -interval
parameter). These statistics are written to a file. A file is created at the end of each sampling
period. Separate files are created for MDisks, volumes, and node statistics.

Use the startstats command to start the collection of statistics, as shown in Example 9-97.

Example 9-97 startstats command


IBM_2145:ITSO_SVC2:superuser>startstats -interval 15

Specify the interval (1 - 60) in minutes. This command starts statistics collection and gathers
data at 15-minute intervals.

Statistics collection: To verify that the statistics collection is set, display the system
properties again, as shown in Example 9-98.

Example 9-98 Statistics collection status and frequency


IBM_2145:ITSO_SVC2:superuser>lssystem
statistics_status on
statistics_frequency 15
-- The output has been shortened for easier reading. --

SVC 6.3: Starting with SVC 6.3, the command svctask stopstats was removed. You
cannot disable the statistics collection.

The statistics collection is now started on the clustered system.
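
As an illustration only, and assuming that the statistics files are placed in the /dumps/iostats directory and that the lsdumps command accepts the -prefix parameter in this release, you can list the statistics files that are available on the configuration node:

IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/iostats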

9.9.8 Determining the status of a copy operation


Use the lscopystatus command to determine whether a file copy operation is in progress,
as shown in Example 9-99. Only one file copy operation can be performed at a time. The
output of this command is a status of either active or inactive.

Example 9-99 lscopystatus command


IBM_2145:ITSO_SVC2:superuser>lscopystatus
status
inactive

9.9.9 Shutting down a clustered system
If all input power to an SVC system is to be removed for more than a few minutes (for
example, if the machine room power is to be shut down for maintenance), it is important to
shut down the clustered system before you remove the power. If the input power is removed
from the uninterruptible power supply units without first shutting down the system and the
uninterruptible power supply units, the uninterruptible power supply units remain operational
and eventually are drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the volumes until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in a subsequent unexpected power loss. Recharging the uninterruptible
power supply can take up to two hours.

Shutting down the clustered system before input power is removed to the uninterruptible
power supply units prevents the battery power from being drained. It also makes it possible for
I/O activity to be resumed when input power is restored.

Complete the following steps to shut down the system:


1. Use the stopsystem command to shut down your SVC system, as shown in
Example 9-100.

Example 9-100 stopsystem command


IBM_2145:ITSO_SVC2:superuser>stopsystem
Are you sure that you want to continue with the shut down?

This command shuts down the SVC clustered system. All data is flushed to disk before the
power is removed. You lose administrative contact with your system and the PuTTY
application automatically closes.
2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you stopped all FlashCopy mappings, Metro Mirror (remote copy) relationships, data migration operations, and forced deletions before you continue. Enter y in response to this message to run the command. Entering anything other than y or Y results in the command not running. In either case, no feedback is displayed.

Important: Before a clustered system is shut down, ensure that all I/O operations are
stopped that are destined for this system because you lose all access to all volumes
that are provided by this system. Failure to do so can result in failed I/O operations
being reported to the host operating systems.

Begin the process of quiescing all I/O to the system by stopping the applications on the
hosts that are using the volumes that are provided by the clustered system.

We completed the tasks that are required to shut down the system. To shut down the
uninterruptible power supply units, press the power-on button on the front panel of each
uninterruptible power supply unit.

Restarting the system: To restart the clustered system, you must first restart the
uninterruptible power supply units by pressing the power button on their front panels. Then,
press the power-on button on the service panel of one of the nodes within the system. After
the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted, you can reestablish administrative contact by
using PuTTY, and your system is fully operational again.

9.10 Nodes
In this section, we describe the tasks that can be performed at an individual node level.

9.10.1 Viewing node details


Use the lsnode command to view the summary information about the nodes that are defined
within the SVC environment. To view more details about a specific node, append the node
name (for example, SVC2N1) to the command.

Example 9-101 shows both of these commands.

Tip: The -delim parameter condenses the output and separates the data fields with the character that you specify (a comma in this example) instead of wrapping the text over multiple lines.

Example 9-101 lsnode command


IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC2N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.SVC2N1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC2:superuser>lsnode SVC2N1
id 1
name SVC2N1
UPS_serial_number 1000739004
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name SVC1N2
config_node no
UPS_unique_id 10000000000027E2
port_id 50050768014027E2
port_status active
port_speed 2Gb
port_id 50050768013027E2
port_status active

port_speed 2Gb
port_id 50050768011027E2
port_status active
port_speed 2Gb
port_id 50050768012027E2
port_status active
port_speed 2Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.SVC2N1
iscsi_alias
failover_active no
failover_name SVC1N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6

9.10.2 Adding a node


After a clustered system is created by using the service panel (the front panel of one of the
SVC nodes) and the system web interface, only one node (the configuration node) is set up.

To have a fully functional SVC system, you must add a second node to the configuration. To
add a node to a clustered system, complete the following steps to gather the necessary
information:
1. Before you can add a node, you must know which unconfigured nodes are available as
candidates. Issue the lsnodecandidate command, as shown in Example 9-102.

Example 9-102 lsnodecandidate command


IBM_2145:ITSO_SVC2:superuser>lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
50050768010037E5 104643 1000739007 10000000000037E5 8G4

2. You must specify to which I/O Group you are adding the node. If you enter the lsnode
command, you can identify the I/O Group ID of the group to which you are adding your
node, as shown in Example 9-103.

Tip: The node that you want to add must have a separate uninterruptible power supply
unit serial number from the uninterruptible power supply unit on the first node.

Example 9-103 lsnode command

IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS
_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,
enclosure_serial_number

4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,i
qn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,

3. Now that you know the available nodes, use the addnode command to add the node to the
SVC clustered system configuration, as shown in Example 9-104.

Example 9-104 addnode -wwnodename command


IBM_2145:ITSO_SVC2:superuser>addnode -wwnodename 50050768010037E5 -iogrp
io_grp1
Node, id [5], successfully added

This command adds the candidate node with the wwnodename of 50050768010037E5 to
the I/O Group called io_grp1.
The -wwnodename parameter (50050768010037E5) was used. However, you can also use the
-panelname parameter (104643) instead, as shown in Example 9-105. If you are standing in
front of the node, it is easier to read the panel name than it is to get the worldwide node
name (WWNN).

Example 9-105 addnode -panelname command


IBM_2145:ITSO_SVC2:superuser>addnode -panelname 104643 -name SVC1N3 -iogrp
io_grp1

The optional -name parameter (SVC1N3) also was used. If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A - Z and a - z, numbers 0 - 9,
the dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the
name cannot start with a number, dash, or the word “node” because this prefix is
reserved for SVC assignment only.

4. If the addnode command returns no information, verify that your second node is powered on and that the zones are correctly defined, and check whether preexisting system configuration data is stored in the node. If you are sure that this node is not part of another active SVC system, you can use the service panel to delete the existing system information. After this action is complete, reissue the lsnodecandidate command and you see that the node is listed.

9.10.3 Renaming a node


Use the chnode command to rename a node within the SVC system configuration, as shown
in Example 9-106.

Example 9-106 chnode -name command


IBM_2145:ITSO_SVC2:superuser>chnode -name ITSO_SVC2_SVC1N3 4

This command renames node ID 4 to ITSO_SVC2_SVC1N3.

Name: The chnode command specifies the new name first. You can use letters A - Z and
a - z, numbers 0 - 9, the dash (-), and the underscore (_). The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or the word “node”
because this prefix is reserved for SVC assignment only.

9.10.4 Deleting a node
Use the rmnode command to remove a node from the SVC clustered system configuration, as
shown in Example 9-107.

Example 9-107 rmnode command


IBM_2145:ITSO_SVC2:superuser>rmnode SVC1N2

This command removes SVC1N2 from the SVC clustered system.

Because SVC1N2 also was the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.

We must restart the PuTTY application to establish a secure session with the new
configuration node.
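
After you reconnect, you can confirm which node took over the configuration node role by checking the config_node column in the output of the lsnode command that is shown in Example 9-101:

IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,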

Important: If this node is the last node in an I/O Group and volumes are still assigned to
the I/O Group, the node is not deleted from the clustered system.

If this node is the last node in the system and the I/O Group has no remaining volumes, the
clustered system is destroyed and all virtualization information is lost. Any data that is still
required must be backed up or migrated before the system is destroyed.

9.10.5 Shutting down a node


Shutting down a single node within the clustered system might be necessary to perform
tasks, such as scheduled maintenance, while the SVC environment is left up and running.

Use the stopcluster -node command to shut down a single node, as shown in
Example 9-108.

Example 9-108 stopcluster -node command


IBM_2145:ITSO_SVC2:superuser>stopcluster -node SVC1N3
Are you sure that you want to continue with the shut down?

This command shuts down node SVC1N3 in a graceful manner. When this node is shut down,
the other node in the I/O Group destages the contents of its cache and enters write-through
mode until the node is powered up and rejoins the clustered system.

Important: You do not need to stop FlashCopy mappings, remote copy relationships, and
data migration operations. The other node handles these activities, but be aware that the
system has a single point of failure now.

If this node is the last node in an I/O Group, all access to the volumes in the I/O Group is lost. Verify that you want to shut down this node before this command is run; in that case, you must also specify the -force flag.

By reissuing the lsnode command (as shown in Example 9-109 on page 562), we can see
that the node is now offline.

Example 9-109 lsnode command
IBM_2145:ITSO_SVC2:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC2N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.SVC2N1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC2:superuser>lsnode SVC1N3
CMMVC5782E The object specified is offline.

Restart: To restart the node manually, press the power-on button that is on the service
panel of the node.

We completed the tasks that are required to view, add, delete, rename, and shut down a node
within an SVC environment.

9.11 I/O Groups


In this section, we describe the tasks that you can perform at an I/O Group level.

9.11.1 Viewing I/O Group details


Use the lsiogrp command to view information about the I/O Groups that are defined within
the SVC environment, as shown in Example 9-110.

Example 9-110 I/O Group details


IBM_2145:ITSO_SVC2:superuser>lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 24 9
1 io_grp1 2 22 9
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0

In our example, the SVC predefines five I/O Groups. In a four-node clustered system (similar
to our example), only two I/O Groups are in use. The other I/O Groups (io_grp2 and io_grp3)
are for a six-node or eight-node clustered system.

The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group
that normally owns them experience multiple failures. By using this design, the volumes can
be moved to the recovery I/O Group and then into a working I/O Group. While temporarily
assigned to the recovery I/O Group, I/O access is not possible.

9.11.2 Renaming an I/O Group


Use the chiogrp command to rename an I/O Group, as shown in Example 9-111 on
page 563.

Example 9-111 chiogrp command


IBM_2145:ITSO_SVC2:superuser>chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first.


If you want to provide a name, you can use letters A - Z, letters a - z, numbers 0 - 9, the
dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the name
cannot start with a number, dash, or the word “iogrp” because this prefix is reserved for
SVC assignment only.

To see whether the renaming was successful, run the lsiogrp command again to see the
change.

We completed the required tasks to rename an I/O Group.

9.11.3 Adding and removing hostiogrp


To map or unmap a specific host object to a specific I/O Group to reach the maximum number
of hosts that is supported by an SVC clustered system, use the addhostiogrp command to
map a specific host to a specific I/O Group, as shown in Example 9-112.

Example 9-112 addhostiogrp command


IBM_2145:ITSO_SVC2:superuser>addhostiogrp -iogrp 1 Kanaga

The addhostiogrp command uses the following parameters (a sketch of the -iogrpall form follows this list):

򐂰 -iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with the -iogrpall option, which specifies that all of the I/O Groups must be mapped to the specified host.
򐂰 host_id_or_name
Identify the host, by ID or name, to which the I/O Groups must be mapped.
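
As a sketch of the -iogrpall form that is described in the preceding list, the following command maps the host Kanaga to all of the I/O Groups in the system:

IBM_2145:ITSO_SVC2:superuser>addhostiogrp -iogrpall Kanaga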

Use the rmhostiogrp command to unmap a specific host to a specific I/O Group, as shown in
Example 9-113.

Example 9-113 rmhostiogrp command


IBM_2145:ITSO_SVC2:superuser>rmhostiogrp -iogrp 0 Kanaga

The rmhostiogrp command uses the following parameters:
򐂰 -iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with the -iogrpall option, which specifies that all of the I/O Groups must be unmapped from the specified host.
򐂰 -force
If the removal of a host to I/O Group mapping results in the loss of the volume to host
mappings, the command fails if the -force flag is not used. However, the -force flag
overrides this behavior and forces the deletion of the host to I/O Group mapping.
򐂰 host_id_or_name
Identify the host by the ID or name to which the I/O Groups must be unmapped.

9.11.4 Listing I/O Groups


To list all of the I/O Groups that are mapped to the specified host and vice versa, use the
lshostiogrp command and specify the host name, as in our example, Kanaga, as shown in
Example 9-114.

Example 9-114 lshostiogrp command


IBM_2145:ITSO_SVC2:superuser>lshostiogrp Kanaga
id name
1 io_grp1

To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost
command, as shown in Example 9-115.

Example 9-115 lsiogrphost command


IBM_2145:ITSO_SVC2:superuser> lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam

In Example 9-115, io_grp1 is the I/O Group name.

9.12 Managing authentication


In the following sections, we describe authentication administration.

9.12.1 Managing users by using the CLI


In this section, we describe how to operate and manage authentication by using the CLI. All
users must now be a member of a predefined user group. You can list those groups by using
the lsusergrp command, as shown in Example 9-116.

Example 9-116 lsusergrp command
IBM_2145:ITSO_SVC2:superuser>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no

Example 9-117 shows a simple example of creating a user. User John is added to the user
group Monitor with the password m0nitor.

Example 9-117 mkuser creates a user called John with password m0nitor
IBM_2145:ITSO_SVC2:superuser>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created

Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server.

The user groups include a defined authority role, as listed in Table 9-2.

Table 9-2 Authority roles


User group: SecurityAdmin
Role: All commands
User: Superusers

User group: Administrator
Role: All commands except chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrator who controls the SVC

User group: CopyOperator
Role: All display commands and the following commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
User: Controls all of the copy functionality of the cluster

User group: Service
Role: All display commands and the following commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: Performs service maintenance and other hardware tasks on the system

User group: Monitor
Role: All display commands and the following commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, and svcconfig backup
User: Needs view access only
򐂰 svcconfig: backup

9.12.2 Managing user roles and groups


Role-based security commands are used to restrict the administrative abilities of a user. We
cannot create user roles, but we can create user groups and assign a predefined role to our
group.
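
As a minimal sketch, and assuming that the mkusergrp command is available with the -name and -role parameters, a user group with the Administrator role can be created as follows (the group name ITSO_Admins is a hypothetical example):

IBM_2145:ITSO_SVC2:superuser>mkusergrp -name ITSO_Admins -role Administrator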

As of SVC 6.3, you can connect to the clustered system by using the same user name with
which you log in to an SVC GUI.

To view the user roles on your system, use the lsusergrp command, as shown in
Example 9-118.

Example 9-118 lsusergrp command


IBM_2145:ITSO_SVC2:superuser>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no

To view the defined users and the user groups to which they belong, use the lsuser
command, as shown in Example 9-119.

Example 9-119 lsuser command


IBM_2145:ITSO_SVC2:superuser>lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor

9.12.3 Changing a user


To change user passwords, run the chuser command.

By using the chuser command, you can modify a user. You can rename a user, assign a new password (if you are logged on with administrative privileges), and move a user from one user group to another user group. However, be aware that a user can be a member of only one group at a time.
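
The following lines are a minimal sketch of these operations, assuming the -password and -usergrp parameters of the chuser command and reusing the user John that was created in Example 9-117 (the new password is only an illustration):

IBM_2145:ITSO_SVC2:superuser>chuser -password m0nit0r2 John
IBM_2145:ITSO_SVC2:superuser>chuser -usergrp CopyOperator John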

9.12.4 Audit log command


The audit log is helpful to show the commands that were entered on a system. Most action
commands that are issued by the old or new CLI are recorded in the audit log.

The native GUI performs actions by using the CLI programs.

The SVC console performs actions by issuing Common Information Model (CIM) commands
to the CIM object manager (CIMOM), which then runs the CLI programs.

Actions that are performed by using the native GUI and the SVC Console are recorded in the
audit log.

The following commands are not audited:


򐂰 dumpconfig
򐂰 cpdumps
򐂰 cleardumps
򐂰 finderr
򐂰 dumperrlog
򐂰 dumpinternallog
򐂰 svcservicetask dumperrlog
򐂰 svcservicetask finderror

The audit log contains approximately 1 MB of data, which can contain about 6,000
average-length commands. When this log is full, the system copies it to a new file in the
/dumps/audit directory on the configuration node and resets the in-memory audit log.

To display entries from the audit log, use the catauditlog -first 5 command to return a list
of five in-memory audit log entries, as shown in Example 9-120.

Example 9-120 catauditlog command


IBM_2145:ITSO_SVC2:superuser>catauditlog -first 5
audit_seq_no timestamp cluster_user ssh_ip_address result res_obj_id action_cmd
459 110928150506 superuser 10.18.228.173 0 6 svctask mkuser
-name John -usergrp Monitor -password '######'
460 110928160353 superuser 10.18.228.173 0 7 svctask
mkmdiskgrp -name DS5000-2 -ext 256
461 110928160535 superuser 10.18.228.173 0 1 svctask mkhost
-name hostone -hbawwpn 210100E08B251DD4 -force -mask 1001

462 110928160755 superuser 10.18.228.173 0 1 svctask mkvdisk
-iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20%
463 110928160817 superuser 10.18.228.173 0 svctask rmvdisk
1

If you must dump the contents of the in-memory audit log to a file on the current configuration
node, use the dumpauditlog command. This command does not provide any feedback; it
provides the prompt only. To obtain a list of the audit log dumps, use the lsdumps command,
as shown in Example 9-121.

Example 9-121 lsdumps command


IBM_2145:ITSO_SVC2:superuser>lsdumps
id filename
0 dump.110711.110914.182844
1 svc.config.cron.bak_108283
2 sel.110711.trc
3 endd.trc
4 rtc.race_mq_log.txt.110711.trc
5 dump.110711.110920.102530
6 ethernet.110711.trc
7 svc.config.cron.bak_110711
8 svc.config.cron.xml_110711
9 svc.config.cron.log_110711
10 svc.config.cron.sh_110711
11 110711.trc
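
Example 9-121 lists the files in the default dump directory. Assuming that the lsdumps command accepts the -prefix parameter in this release, you can restrict the listing to the audit log dumps in the /dumps/audit directory that was described previously:

IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/audit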

9.13 Managing Copy Services


In the following sections, we describe how to manage Copy Services.

9.13.1 FlashCopy operations


In this section, we use a scenario to show how to use commands with PuTTY to perform
FlashCopy. For information about other commands, see the IBM System Storage Open
Software Family SAN Volume Controller: Command-Line Interface User’s Guide,
GC27-2287.

Scenario description
We use the scenario that is described in this section in both the CLI section and the GUI
section. In this scenario, we want to FlashCopy the following volumes:
򐂰 DB_Source: Database files
򐂰 Log_Source: Database log files
򐂰 App_Source: Application files

We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source because data integrity must be kept on DB_Source and Log_Source.

In our scenario, the application files are independent of the database; therefore, we create a
single FlashCopy mapping for App_Source. We make two FlashCopy targets for DB_Source
and Log_Source and, therefore, two Consistency Groups. The scenario is shown in Figure 9-6
on page 569.

Figure 9-6 FlashCopy scenario

9.13.2 Setting up FlashCopy


We created the source and target volumes. The following source and target volumes are
identical in size, which is a requirement of the FlashCopy function:
򐂰 DB_Source, DB_Target1, and DB_Target2
򐂰 Log_Source, Log_Target1, and Log_Target2
򐂰 App_Source and App_Target1

Complete the following steps to set up the FlashCopy:


1. Create the following FlashCopy Consistency Groups:
– FCCG1
– FCCG2
2. Create the following FlashCopy mappings for source volumes:
– DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1.
– DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2.
– Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1.
– Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2.
– App_Source FlashCopy to App_Target1; the mapping name is App_Map1.
– All mappings use a background copy rate of 50 (the default).

9.13.3 Creating a FlashCopy Consistency Group
Use the command mkfcconsistgrp to create a new FlashCopy Consistency Group. The ID of
the new group is returned. If you created several FlashCopy mappings for a group of volumes
that contain elements of data for the same application, it might be convenient to assign these
mappings to a single FlashCopy Consistency Group. Then, you can issue a single prepare or
start command for the whole group so that, for example, all files for a particular database are
copied at the same time.

In Example 9-122, the FCCG1 and FCCG2 Consistency Groups are created to hold the
FlashCopy maps of DB and Log. This step is important for FlashCopy on database
applications because it helps to maintain data integrity during FlashCopy.

Example 9-122 Creating two FlashCopy Consistency Groups


IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 9-123, we checked the status of the Consistency Groups. Each Consistency
Group has a status of empty.

Example 9-123 Checking the status


IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 empty
2 FCCG2 empty

If you want to change the name of a Consistency Group, you can use the chfcconsistgrp
command. Type chfcconsistgrp -h for help with this command.
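
For example, assuming the same -name syntax that the other change commands in this chapter use, the following sketch renames FCCG1 (the new name FCCG1_DB is a hypothetical example):

IBM_2145:ITSO_SVC3:superuser>chfcconsistgrp -name FCCG1_DB FCCG1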

9.13.4 Creating a FlashCopy mapping


To create a FlashCopy mapping, use the mkfcmap command. This command creates a
FlashCopy mapping that maps a source volume to a target volume to prepare for subsequent
copying.

When this command is run, a FlashCopy mapping logical object is created. This mapping
persists until it is deleted. The mapping specifies the source and destination volumes. The
destination must be identical in size to the source or the mapping fails. Issue the lsvdisk
-bytes command to find the exact size of the source volume for which you want to create a
target disk of the same size.
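
For example, the exact size of the DB_Source volume that is used in this scenario can be displayed as follows; the capacity field of the output is then reported in bytes rather than in gigabytes:

IBM_2145:ITSO_SVC3:superuser>lsvdisk -bytes DB_Source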

In a single mapping, source and destination cannot be on the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a Consistency Group. These groups of mappings can be triggered at
the same time, which enables multiple volumes to be copied at the same time and creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files are on separate disks.

If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can be started only
on an individual basis.

The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy does not proceed in the background. The default is 50.

Tip: You can use a parameter to delete FlashCopy mappings automatically after the
background copy is completed (when the mapping gets to the idle_or_copied state). Use
the following command:
mkfcmap -autodelete

This command does not delete mappings in cascade with dependent mappings because it
cannot get to the idle_or_copied state in this situation.

Example 9-124 shows the creation of the first FlashCopy mapping for DB_Source, Log_Source,
and App_Source.

Example 9-124 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target1 -name
DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target1 -name
Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Target1 -name
App_Map1
FlashCopy Mapping, id [2], successfully created

Example 9-125 shows the command to create a second FlashCopy mapping for volume
DB_Source and volume Log_Source.

Example 9-125 Create more FlashCopy mappings


IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target2 -name
DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target2 -name
Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 9-126 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.

Example 9-126 Check the result of Multiple Target FlashCopy mappings


IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name
group_id group_name status progress copy_rate clean_progress incremental
partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no

2 App_Map1 9 App_Source 10 App_Target1
idle_or_copied 0 50 100 off
no no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the chfcmap command. Enter
chfcmap -h to get help with this command.

9.13.5 Preparing (pre-triggering) the FlashCopy mapping


Although the mappings were created, the cache still accepts data for the source volumes. You
can trigger the mapping only when the cache does not contain any data for FlashCopy source
volumes. You must issue a prestartfcmap command to prepare a FlashCopy mapping to
start. This command tells the SVC to flush the cache of any content for the source volume
and to pass through any further write data for this volume.

When the prestartfcmap command is run, the mapping enters the Preparing state. After the
preparation is complete, it changes to the Prepared state. At this point, the mapping is ready
for triggering. Preparing and the subsequent triggering are performed on a Consistency
Group basis.

Only mappings that belong to Consistency Group 0 can be prepared on their own because
Consistency Group 0 is a special group that contains the FlashCopy mappings that do not
belong to any Consistency Group. A FlashCopy must be prepared before it can be triggered.

In our scenario, App_Map1 is not in a Consistency Group. In Example 9-127, we show how to
start the preparation for App_Map1.

Example 9-127 Prepare a FlashCopy without a Consistency Group


IBM_2145:ITSO_SVC3:superuser>prestartfcmap App_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0

clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

Another option is to add the -prep parameter to the startfcmap command, which prepares
the mapping and then starts the FlashCopy.
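
For example, the following single command is a sketch of that combined form for the App_Map1 mapping; it replaces the separate prestartfcmap and startfcmap invocations:

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep App_Map1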

In Example 9-127 on page 572, we also show how to check the status of the current
FlashCopy mapping. The status of App_Map1 is prepared.

9.13.6 Preparing (pre-triggering) the FlashCopy Consistency Group


Use the prestartfcconsistgrp command to prepare a FlashCopy Consistency Group. As
described 9.13.5, “Preparing (pre-triggering) the FlashCopy mapping” on page 572, this
command flushes the cache of any data that is destined for the source volume and forces the
cache into the write-through mode until the mapping is started. The difference is that this
command prepares a group of mappings (at a Consistency Group level) instead of one
mapping.

When you assign several mappings to a FlashCopy Consistency Group, you must issue only
a single prepare command for the whole group to prepare all of the mappings at one time.

Example 9-128 shows how we prepare the Consistency Groups for DB and Log and check the
result. After the command runs all of the FlashCopy maps that we have, all of the maps and
Consistency Groups are in the prepared status. Now, we are ready to start the FlashCopy.

Example 9-128 Prepare FlashCopy Consistency Groups


IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG2

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared

9.13.7 Starting (triggering) FlashCopy mappings
The startfcmap command is used to start a single FlashCopy mapping. When a single
FlashCopy mapping is started, a point-in-time copy of the source volume is created on the
target volume.

When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is then updated on the source is copied to the destination. We
suggest that you use this scenario as a backup copy while the mapping exists in the Copying
state. If the copy is stopped, the destination is unusable.

If you want a duplicate copy of the source at the destination, set the background copy rate
greater than 0. By setting this rate, the system copies all of the data (even unchanged data) to
the destination and eventually reaches the idle_or_copied state. After this data is copied, you
can delete the mapping and have a usable point-in-time copy of the source at the destination.

In Example 9-129, App_Map1 changes to the copying status after the FlashCopy is started.

Example 9-129 Start App_Map1


IBM_2145:ITSO_SVC3:superuser>startfcmap App_Map1
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 prepared 0 50 0 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 prepared 0 50 0 off
no no
2 App_Map1 9 App_Source 10 App_Target1
copying 0 50 100 off no
110929113407 no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 prepared 0 50 0 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 prepared 0 50 0 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status copying
progress 0
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off

difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

9.13.8 Starting (triggering) the FlashCopy Consistency Group


Run the startfcconsistgrp command, after which the database can be resumed, as
shown in Example 9-130. We created two point-in-time consistent copies of the DB and Log
volumes. After this command is run, the Consistency Group and the FlashCopy maps are all
in the copying status.

Example 9-130 Start FlashCopy Consistency Group


IBM_2145:ITSO_SVC3:superuser>startfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>startfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:superuser>
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 copying
2 FCCG2 copying

9.13.9 Monitoring the FlashCopy progress


To monitor the background copy progress of the FlashCopy mappings, run the
lsfcmapprogress command for each FlashCopy mapping.

Alternatively, you can also query the copy progress by using the lsfcmap command. As
shown in Example 9-131, DB_Map1 returns information that the background copy is 23%
completed and Log_Map1 returns information that the background copy is 41% completed.
DB_Map2 returns information that the background copy is 5% completed and Log_Map2 returns
information that the background copy is 4% completed.

Example 9-131 Monitoring the background copy progress


IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress DB_Map1
id progress
0 23
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress Log_Map1
id progress
1 41
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress Log_Map2

id progress
4 4
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress DB_Map2
id progress
3 5
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress App_Map1
id progress
2 10

When the background copy completes, the FlashCopy mapping enters the idle_or_copied
state. When all of the FlashCopy mappings in a Consistency Group enter this status, the
Consistency Group is at the idle_or_copied status.

When in this state, the FlashCopy mapping can be deleted and the target disk can be used
independently if, for example, another target disk is to be used for the next FlashCopy of the
particular source volume.

9.13.10 Stopping the FlashCopy mapping


The stopfcmap command is used to stop a FlashCopy mapping. By using this command, you
can stop an active mapping (copying) or suspended mapping. When this command is run, it
stops a single FlashCopy mapping.

Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment,
consider whether you want to keep any of the dependent mappings. If you do not want to
keep these mappings, run the stop command with the -force parameter. This command
stops all of the dependent maps and negates the need for the stopping copy process to
run.

When a FlashCopy mapping is stopped, the target volume becomes invalid. The target
volume is set offline by the SVC. The FlashCopy mapping must be prepared again or
retriggered to bring the target volume online again.

Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by the SVC if the mapping
is in the copying state and progress=100.

Example 9-132 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 changed
to idle_or_copied.

Example 9-132 Stop App_Map1 FlashCopy


IBM_2145:ITSO_SVC3:superuser>stopfcmap App_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied

progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

9.13.11 Stopping the FlashCopy Consistency Group


The stopfcconsistgrp command is used to stop any active FlashCopy Consistency Group. It
stops all mappings in a Consistency Group. When a FlashCopy Consistency Group is
stopped for all mappings that are not 100% copied, the target volumes become invalid and
are set offline by the SVC. The FlashCopy Consistency Group must be prepared again and
restarted to bring the target volumes online again.

Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use or when you want to modify the FlashCopy Consistency Group. When a Consistency
Group is stopped, the target volume might become invalid and be set offline by the SVC,
depending on the state of the mapping.

As shown in Example 9-133, we stop the FCCG1 and FCCG2 Consistency Groups. The status of
the two Consistency Groups changed to stopped. Most of the FlashCopy mapping
relationships now have the status of stopped. As you can see, several of them completed the
copy operation and are now in a status of idle_or_copied.

Example 9-133 Stop FCCG1 and FCCG2 Consistency Groups


IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG1

IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG2

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied

IBM_2145:ITSO_SVC3:superuser>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_
name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,res
toring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,
no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,
no

4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,1109291138
06,no

9.13.12 Deleting the FlashCopy mapping


To delete a FlashCopy mapping, use the rmfcmap command. When the command is run, it
attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped,
the command fails unless the -force flag is specified. If the mapping is active (copying), it
must first be stopped before it can be deleted.

Deleting a mapping deletes only the logical relationship between the two volumes. However, when the command is issued with the -force flag on an active FlashCopy mapping, the delete renders the data on the FlashCopy mapping target volume inconsistent.

Tip: If you want to use the target volume as a normal volume, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when the FlashCopy mapping is created.

As shown in Example 9-134, we delete App_Map1.

Example 9-134 Delete App_Map1


IBM_2145:ITSO_SVC3:superuser>rmfcmap App_Map1

9.13.13 Deleting the FlashCopy Consistency Group


The rmfcconsistgrp command is used to delete a FlashCopy Consistency Group. When run,
this command deletes the specified Consistency Group. If mappings are members of the
group, the command fails unless the -force flag is specified.

If you also want to delete all of the mappings in the Consistency Group, first delete the
mappings and then delete the Consistency Group.

As shown in Example 9-135, we delete all of the maps and Consistency Groups and then
check the result.

Example 9-135 Remove the mappings and the Consistency Groups


IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
IBM_2145:ITSO_SVC3:superuser>lsfcmap
IBM_2145:ITSO_SVC3:superuser>

9.13.14 Migrating a volume to a thin-provisioned volume
Complete the following steps to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned, space-efficient target volume with the same size as the volume
that you want to migrate.
Example 9-136 shows the details of a volume with ID 11. It was created as a
thin-provisioned volume with the same size as the App_Source volume.
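
A thin-provisioned volume of this kind can be created with the mkvdisk command. The following sketch shows one possible form of that command for this scenario (the 2% real size is an arbitrary starting value because -autoexpand is also specified):

mkvdisk -mdiskgrp Multi_Tier_Pool -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 32 -warning 80% -name App_Source_SE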

Example 9-136 lsvdisk 11 command


IBM_2145:ITSO_SVC3:superuser>lsvdisk 11
id 11
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE00000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on

easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB

2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and
the thin-provisioned volume is the target. Specify a copy rate as high as possible and
activate the -autodelete option for the mapping, as shown in Example 9-137.

Example 9-137 mkfcmap command

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Source_SE -name
MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:superuser>lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown
in Example 9-138.

Example 9-138 prestartfcmap and lsfcmap commands

IBM_2145:ITSO_SVC3:superuser>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100

start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

4. Run the startfcmap command, as shown in Example 9-139.

Example 9-139 startfcmap command


IBM_2145:ITSO_SVC3:superuser>startfcmap MigrtoThinProv

5. Monitor the copy process by using the lsfcmapprogress command, as shown in Example 9-140.

Example 9-140 lsfcmapprogress command


IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
id progress
0 67

6. When the background copy completes, the FlashCopy mapping is deleted automatically because the -autodelete option is set, as shown in Example 9-141.

Example 9-141 lsfcmap command

IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does
not exist.
IBM_2145:ITSO_SVC3:superuser>

An independent, thin-provisioned copy of the source volume (App_Source) was created.
Example 9-142 shows the source volume after the migration completes.

Example 9-142 lsvdisk App_Source

IBM_2145:ITSO_SVC3:superuser>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on

easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

Real size: Regardless of the real size that you defined for the target thin-provisioned
volume, after the migration the real size is at least the capacity of the source volume.

To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same
scenario.
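
As a sketch of that reverse direction, assuming a fully allocated volume that is named App_Source_FA (a hypothetical name) was created with the same capacity, the mapping simply swaps the source and target roles:

mkfcmap -source App_Source_SE -target App_Source_FA -name MigrtoFullAlloc -copyrate 100 -autodelete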

9.13.15 Reverse FlashCopy


You can also have a reverse FlashCopy mapping without having to remove the original
FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.

In Example 9-143, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1 is a reverse
FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is FCMAP_1’s
target volume and whose target is a separate volume that is named Volume_FC_T1.

In our example, we started FCMAP_1 and, later, FCMAP_2 after the environment was created.

We then started FCMAP_rev_1 without specifying the -restore parameter to show why the
-restore parameter is required, and to show the following message that is issued if you
omit it:

CMMVC6298E The command failed because a target VDisk has dependent FlashCopy
mappings.

When a reverse FlashCopy mapping is started, you must use the -restore option to indicate
that you want to overwrite the data on the source disk of the forward mapping.

Example 9-143 Reverse FlashCopy


IBM_2145:ITSO_SVC3:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
3 Volume_FC_S 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000003 0 1
empty 0 0 no
4 Volume_FC_T_S1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000004 0 1
empty 0 0 no
5 Volume_FC_T1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000005 0 1
empty 0 0 no

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name
FCMAP_1 -copyrate 50
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name
FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name
FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
idle_or_copied 0 50 100 off 1 FCMAP_rev_1
no no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
idle_or_copied 0 50 100 off
no no

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_1

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_2

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0
50 100 off 1 FCMAP_rev_1 no
no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 4 50 100 off
no 110929143739 no

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep -restore FCMAP_rev_1

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
copying 43 100 56 off 1 FCMAP_rev_1 no
110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
copying 56 100 43 off 0 FCMAP_1 yes
110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 37 100 100 off no
110929151926 no

As you can see in Example 9-143 on page 583, FCMAP_rev_1 shows a restoring value of yes
while the FlashCopy mapping is copying. After it finishes copying, the restoring value field is
changed to no.

9.13.16 Split-stopping of FlashCopy maps
The stopfcmap command has a -split option. By using this option, the source volume of a
mapping that is 100% complete can be removed from the head of a cascade when the mapping is
stopped.

For example, if we have four volumes in a cascade (A → B → C → D), and the map A → B is
100% complete, the use of the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied and the remaining cascade becoming B → C → D.

Without the -split option, volume A remains at the head of the cascade (A → C → D).
Consider the following sequence of steps:
1. The user takes a backup that uses the mapping A → B. A is the production volume and B
is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping to
B → A.
3. The user then takes another backup from the production disk A, which results in the
cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. The backup disk B is
now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start mapping A → B (by
using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C →
A).

Stopping A → B with the -split option results in the cascade A → C. This action does not
result in the same problem because the production disk A is at the head of the cascade
instead of the backup disk B.
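
No example of the -split option was captured in our scenario. As a minimal sketch, the syntax for a 100% complete mapping that is named mapAB is:

stopfcmap -split mapAB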

9.14 Metro Mirror operation

Intercluster example: The example in this section is for intercluster operations only.

If you want to set up intracluster operations, we highlight the parts of the following
procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
system ITSO_SVC2 primary site and the SVC system ITSO_SVC4 at the secondary site.
Table 9-3 shows the details of the volumes.

Table 9-3 Volume details


Content of volume Volumes at primary site Volumes at secondary site

Database files MM_DB_Pri MM_DB_Sec

Database log files MM_DBLog_Pri MM_DBLog_Sec

Application files MM_App_Pri MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a
CG_W2K3_MM Consistency Group is created to handle Metro Mirror relationships for them.

Because application files are independent of the database in this scenario, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 shows the Metro
Mirror setup.

Figure 9-7 Metro Mirror scenario

9.14.1 Setting up Metro Mirror


In this section, we assume that the source and target volumes were created and that the
inter-switch links (ISLs) and zoning are in place to enable the SVC clustered systems to
communicate.

Complete the following steps to set up the Metro Mirror:


1. Create an SVC partnership between ITSO_SVC2 and ITSO_SVC4 on both of the SVC
clustered systems.
2. Create a Metro Mirror Consistency Group that is named CG_W2K3_MM.
3. Create the Metro Mirror relationship for MM_DB_Pri with the following settings:
– Master: MM_DB_Pri
– Auxiliary: MM_DB_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL1
– Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri with the following settings:
– Master: MM_DBLog_Pri
– Auxiliary: MM_DBLog_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL2
– Consistency Group: CG_W2K3_MM

5. Create the Metro Mirror relationship for MM_App_Pri with the following settings:
– Master: MM_App_Pri
– Auxiliary: MM_App_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL3

In the following section, we perform each step by using the CLI.

9.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and ITSO_SVC4

We create the SVC partnership on both systems.

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 9.14.3, “Creating a Metro Mirror Consistency Group” on
page 590.

Pre-verification
To verify that both systems can communicate with each other, use the
lspartnershipcandidate command.

As shown in Example 9-144, ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC2 for
the SVC system partnership, and vice versa. Therefore, both systems can communicate with
each other.

Example 9-144 Listing the available SVC systems for partnership


IBM_2145:ITSO_SVC2:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC1
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2

IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006AC03A42 no ITSO_SVC1
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
000002006BE04FC4 no ITSO_SVC2

Example 9-145 on page 588 shows the output of the lspartnership and lssystem
commands before setting up the Metro Mirror relationship. We show them so that you can
compare with the same relationship after setting up the Metro Mirror relationship.

As of SVC 6.3, you can create a partnership between the SVC system and the IBM Storwize
V7000 system. Be aware that to create this partnership, you must change the layer
parameter on the IBM Storwize V7000 system. It must be changed from storage to
replication with the chsystem command.

This parameter cannot be changed on the SVC system. It is fixed to the value of appliance,
as shown in Example 9-145 on page 588.
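
On the IBM Storwize V7000 system, that change might look like the following sketch (it is run on the Storwize V7000 CLI, typically while no partnerships or remote copy relationships are defined on that system):

chsystem -layer replication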

Example 9-145 Pre-verification of system configuration
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

IBM_2145:ITSO_SVC2:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC2
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB

tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance

IBM_2145:ITSO_SVC4:superuser>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB

has_nas_key no
layer appliance

Partnership between clustered systems


In Example 9-146, a partnership is created between ITSO_SVC2 and ITSO_SVC4 that specifies
that 50 MBps bandwidth is to be used for the background copy.

To check the status of the newly created partnership, run the lspartnership command. Note
that the new partnership is only partially configured. It remains partially configured until
the mkpartnership command is also run on the other SVC system.

Example 9-146 Creating the partnership from ITSO_SVC2 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

In Example 9-147, the partnership is created between ITSO_SVC4 back to ITSO_SVC2, which
specifies the bandwidth that is to be used for a background copy of 50 MBps.

After the partnership is created, verify that the partnership is fully configured on both systems
by reissuing the lspartnership command.

Example 9-147 Creating the partnership from ITSO_SVC4 to ITSO_SVC2 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50

9.14.3 Creating a Metro Mirror Consistency Group


In Example 9-148, we create the Metro Mirror Consistency Group by using the
mkrcconsistgrp command. This Consistency Group is used for the Metro Mirror relationships
of the database volumes that are named MM_DB_Pri and MM_DBLog_Pri. The Consistency
Group is named CG_W2K3_MM.

Example 9-148 Creating the Metro Mirror Consistency Group CG_W2K3_MM


IBM_2145:ITSO_SVC2:superuser>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created

IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none

9.14.4 Creating the Metro Mirror relationships
In Example 9-149, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri
and MM_DBLog_Pri. Also, we make them members of the Metro Mirror Consistency Group
CG_W2K3_MM. We use the lsvdisk command to list all of the volumes in the ITSO_SVC2 system.

We then use the lsrcrelationshipcandidate command to show the volumes in the
ITSO_SVC4 system. By using this command, we check the possible candidates for MM_DB_Pri.
After checking all of these conditions, we use the mkrcrelationship command to create the
Metro Mirror relationship.

To verify the new Metro Mirror relationships, list them with the lsrcrelationship command.

Example 9-149 Creating Metro Mirror relationships MMREL1 and MMREL2


IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 MM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no
1 MM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 MM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no
IBM_2145:ITSO_SVC2:superuser>lsrcrelationshipcandidate
id vdisk_name
0 MM_DB_Pri
1 MM_DBLog_Pri
2 MM_App_Pri

IBM_2145:ITSO_SVC2:superuser>lsrcrelationshipcandidate -aux ITSO_SVC4 -master MM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_DBLog_Sec
2 MM_App_Sec

IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4
-consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4
-consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [3], successfully created

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC2 0 MM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM
inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC2 3 MM_Log_Pri
0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0
CG_W2K3_MM inconsistent_stopped 50 0 metro none

9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
In Example 9-150, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri.
After the stand-alone Metro Mirror relationship is created, we check the status of this Metro
Mirror relationship.

The state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with
the -sync option. The -sync option indicates that the secondary (auxiliary) volume is
synchronized with the primary (master) volume. Initial background synchronization is skipped
when this option is used, even though the volumes are not synchronized in this scenario.

We want to show the option of pre-synchronized master and auxiliary volumes before the
relationship is set up. We created the relationship for MM_App_Sec by using the -sync option.

Tip: Use the -sync option only when the target volume already contains all of the data from
the source volume. When this option is used, no initial background copy is performed between
the primary volume and the secondary volume.

MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.

Example 9-150 Creating a stand-alone relationship and verifying it


IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync
-cluster ITSO_SVC4 -name MMREL3
RC Relationship, id [2], successfully created

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.6 Starting Metro Mirror
Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to
use Metro Mirror relationships in our environment.

When Metro Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy for a data set if a failure occurs that affects the production site.

In this section, we show how to stop and start stand-alone Metro Mirror relationships and
Consistency Groups.

Starting a stand-alone Metro Mirror relationship


In Example 9-151, we start a stand-alone Metro Mirror relationship that is named MMREL3.
Because the Metro Mirror relationship was in the Consistent stopped state and no updates
were made to the primary volume, the relationship quickly enters the
consistent_synchronized state.

Example 9-151 Starting the stand-alone Metro Mirror relationship


IBM_2145:ITSO_SVC2:superuser>startrcrelationship MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.7 Starting a Metro Mirror Consistency Group


In Example 9-152 on page 594, we start the Metro Mirror Consistency Group CG_W2K3_MM.
Because the Consistency Group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy completes for all of the relationships in
the Consistency Group.

Upon completion of the background copy, it enters the consistent_synchronized state.

Example 9-152 Starting the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
master inconsistent_copying 2 metro none

9.14.8 Monitoring the background copy progress


To monitor the background copy progress, we can use the lsrcrelationship command. This
command shows all of the defined Metro Mirror relationships if it is used without any
arguments. In the command output, progress indicates the current background copy
progress. Our Metro Mirror relationship is shown in Example 9-153.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror Consistency Groups or relationships change their state.

Example 9-153 Monitoring the background copy progress example


IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL1
id 0
name MMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 0
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 81
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL2
id 3
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 3
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4

aux_vdisk_id 3
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all Metro Mirror relationships complete the background copy, the Consistency Group
enters the consistent_synchronized state, as shown in Example 9-154.

Example 9-154 Listing the Metro Mirror Consistency Group


IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

9.14.9 Stopping and restarting Metro Mirror


In this section and in the following sections, we describe how to stop, restart, and change the
direction of the stand-alone Metro Mirror relationships and the Consistency Group.

9.14.10 Stopping a stand-alone Metro Mirror relationship


Example 9-155 on page 596 shows how to stop the stand-alone Metro Mirror relationship
while enabling access (write I/O) to the primary and secondary volumes. It also shows the
relationship entering the idling state.

Example 9-155 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO_SVC2:superuser>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.11 Stopping a Metro Mirror Consistency Group


Example 9-156 shows how to stop the Metro Mirror Consistency Group without specifying the
-access flag. The Consistency Group enters the consistent_stopped state.

Example 9-156 Stopping a Metro Mirror Consistency Group


IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

If we want to enable access (write I/O) to the secondary volume later, we reissue the
stoprcconsistgrp command and specify the -access flag. The Consistency Group changes
to the idling state, as shown in Example 9-157.

Example 9-157 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

9.14.12 Restarting a Metro Mirror relationship in the Idling state


When you are restarting a Metro Mirror relationship in the Idling state, you must specify the
copy direction.

If any updates were performed on the master or the auxiliary volume, consistency is
compromised. Therefore, we must issue the command with the -force flag to restart a
relationship, as shown in Example 9-158.

Example 9-158 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO_SVC2:superuser>startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync

copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.13 Restarting a Metro Mirror Consistency Group in the Idling state


When you are restarting a Metro Mirror Consistency Group in the Idling state, the copy
direction must be specified.

If any updates were performed on the master or the auxiliary volume in any of the Metro
Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we
must use the -force flag to start a relationship. If the -force flag is not used, the command
fails.

In Example 9-159, we change the copy direction by specifying the auxiliary volumes to
become the primaries.

Example 9-159 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

9.14.14 Changing the copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror
relationship and the Consistency Group.

9.14.15 Switching the copy direction for a Metro Mirror relationship
When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command, which
specifies the primary volume. If the specified volume is a primary when you issue this
command, the command has no effect.

In Example 9-160, we change the copy direction for the stand-alone Metro Mirror relationship
by specifying the auxiliary volume to become the primary volume.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from the primary to the secondary because all of the I/O is
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required before the switchrcrelationship command is used.

Example 9-160 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC2:superuser>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name

state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.14.16 Switching the copy direction for a Metro Mirror Consistency Group
When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the Consistency Group by using the switchrcconsistgrp
command and specifying the primary volume.

If the specified volume is already a primary volume when you issue this command, the
command has no effect.

In Example 9-161, we change the copy direction for the Metro Mirror Consistency Group by
specifying the auxiliary volume to become the primary volume.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all of the I/O is inhibited when
that volume becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.

Example 9-161 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
IBM_2145:ITSO_SVC2:superuser>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM

master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

9.14.17 Creating a SAN Volume Controller partnership among clustered systems

Starting with SVC 5.1, you can have a clustered system partnership among many SVC
systems. By using this capability, you can create the following configurations that use a
maximum of four connected systems:
- Star configuration
- Triangle configuration
- Fully connected configuration
- Daisy-chain configuration

In this section, we describe how to configure the SVC system partnership for each
configuration.

Important: To have a supported and working configuration, all SVC systems must be at
level 5.1 or higher.

In our scenarios, we configure the SVC partnership by referring to the clustered systems as
A, B, C, and D, as shown in the following examples:
- ITSO_SVC1 = A
- ITSO_SVC2 = B
- ITSO_SVC3 = C
- ITSO_SVC4 = D

Example 9-162 shows the available systems for a partnership by using the
lsclustercandidate command on each system.

Example 9-162 Available clustered systems


IBM_2145:ITSO_SVC2:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC1

IBM_2145:ITSO_SVC1:superuser>lspartnershipcandidate

id configured name
0000020061C06FCA no ITSO_SVC4
000002006BE04FC4 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3

IBM_2145:ITSO_SVC3:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC2
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC1

IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC1

9.14.18 Star configuration partnership


Figure 9-8 shows the star configuration.

Figure 9-8 Star configuration

Example 9-163 shows the sequence of mkpartnership commands that are run to create a
star configuration.

Example 9-163 Creating a star configuration by using the mkpartnership command


From ITSO_SVC2 to multiple systems

IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC4

From ITSO_SVC1 to ITSO_SVC2

IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC2

From ITSO_SVC3 to ITSO_SVC2

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC2

From ITSO_SVC4 to ITSO_SVC2

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2

From ITSO_SVC2

IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50

From ITSO_SVC1

IBM_2145:ITSO_SVC1:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC1 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50

From ITSO_SVC3

IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50

From ITSO_SVC4

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
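
For orientation only, a relationship between the hub system and one of its partners could then be created as shown in the following sketch (the volume and relationship names are placeholders):

mkrcrelationship -master <master_volume> -aux <aux_volume> -cluster ITSO_SVC4 -name <relationship_name>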

Triangle configuration
Figure 9-9 shows the triangle configuration.

Figure 9-9 Triangle configuration

Example 9-164 shows the sequence of mkpartnership commands that are run to create a
triangle configuration.

Example 9-164 Creating a triangle configuration


From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3

IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC1 to ITSO_SVC2 and ITSO_SVC3

IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC1 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC2 and ITSO_SVC1

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC2


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
000002006AC03A42 ITSO_SVC1 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

Fully connected configuration


Figure 9-10 shows the fully connected configuration.

Figure 9-10 Fully connected configuration

Example 9-165 on page 605 shows the sequence of mkpartnership commands that are run
to create a fully connected configuration.

Example 9-165 Creating a fully connected configuration
From ITSO_SVC2 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4

IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC1 to ITSO_SVC2, ITSO_SVC3 and ITSO_SVC4

IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC1 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC4

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC3

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC2


IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

Daisy-chain configuration
Figure 9-11 shows the daisy-chain configuration.

Figure 9-11 Daisy-chain configuration

Example 9-166 shows the sequence of mkpartnership commands that are run to create a
daisy-chain configuration.

Example 9-166 Creating a daisy-chain configuration


From ITSO_SVC2 to ITSO_SVC1

IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50

From ITSO_SVC1 to ITSO_SVC2 and ITSO_SVC3

IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC1 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC1 and ITSO_SVC4

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC3

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3


IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

9.15 Global Mirror operation
In the following scenario, we set up an intercluster Global Mirror relationship between the
SVC system ITSO_SVC2 at the primary site and the SVC system ITSO_SVC4 at the secondary
site.

Intercluster example: This example is for an intercluster Global Mirror operation only. If
you want to set up an intracluster operation, we highlight the steps in the following
procedure that you do not need to perform.

Table 9-4 shows the details of the volumes.

Table 9-4 Details of the volumes for the Global Mirror relationship scenario
Content of volume Volumes at primary site Volumes at secondary site

Database files GM_DB_Pri GM_DB_Sec

Database log files GM_DBLog_Pri GM_DBLog_Sec

Application files GM_App_Pri GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a


Consistency Group to handle Global Mirror relationships for them. Because the application
files are independent of the database in this scenario, we create a stand-alone Global Mirror
relationship for GM_App_Pri. Figure 9-12 shows the Global Mirror relationship setup.

Figure 9-12 Global Mirror scenario

9.15.1 Setting up Global Mirror
In this section, we assume that the source and target volumes were created and that the ISLs
and zoning are in place to enable the SVC systems to communicate.

Complete the following steps to set up the Global Mirror:


1. Create an SVC partnership between ITSO_SVC2 and ITSO_SVC4 on both SVC clustered
systems with a bandwidth of 100 MBps.
2. Create a Global Mirror Consistency Group with the name CG_W2K3_GM.
3. Create the Global Mirror relationship for GM_DB_Pri that uses the following settings:
– Master: GM_DB_Pri
– Auxiliary: GM_DB_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL1
– Consistency Group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri that uses the following settings:
– Master: GM_DBLog_Pri
– Auxiliary: GM_DBLog_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL2
– Consistency Group: CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri that uses the following settings:
– Master: GM_App_Pri
– Auxiliary: GM_App_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL3

In the following sections, we perform each step by using the CLI.
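
For orientation before we start, the main syntactic difference from the Metro Mirror commands is the -global flag of the mkrcrelationship command. A minimal sketch of the relationship that is created in step 5 follows (the full procedure, with output, is shown in the next sections):

mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -global -name GMREL3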

9.15.2 Creating a SAN Volume Controller partnership between ITSO_SVC2 and ITSO_SVC4

We create an SVC partnership between these clustered systems.

Intracluster Global Mirror: If you are creating an intracluster Global Mirror, do not perform
the next step. Instead, go to 9.15.3, “Changing link tolerance and system delay simulation”
on page 610.

Pre-verification
To verify that both clustered systems can communicate with each other, use the
lspartnershipcandidate command. Example 9-167 confirms that our clustered systems can
communicate because ITSO_SVC4 is an eligible SVC system candidate to ITSO_SVC2 for the
SVC system partnership, and vice versa. Therefore, both systems can communicate with
each other.

Example 9-167 Listing the available SVC systems for partnership


IBM_2145:ITSO_SVC2:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate

id configured name
000002006BE04FC4 no ITSO_SVC2

In Example 9-168, we show the output of the lspartnership command before we set up the
SVC systems’ partnership for Global Mirror. We show this output for comparison after we set
up the SVC partnership.

Example 9-168 Pre-verification of the system configuration


IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

Partnership between systems


In Example 9-169, we create the partnership from ITSO_SVC2 to ITSO_SVC4 and specify a 100
MBps bandwidth to use for the background copy. To verify the status of the new partnership,
we issue the lspartnership command. The new partnership is only partially configured. It
remains partially configured until we run the mkpartnership command on the other clustered
system.

Example 9-169 Creating the partnership from ITSO_SVC2 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC2:superuser>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 100

In Example 9-170, we create the partnership from ITSO_SVC4 back to ITSO_SVC2 and specify a
100 MBps bandwidth to use for the background copy. After the partnership is created, verify
that the partnership is fully configured by reissuing the lspartnership command.

Example 9-170 Creating the partnership from ITSO_SVC4 to ITSO_SVC2 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 100 ITSO_SVC2

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC2 remote fully_configured 100

IBM_2145:ITSO_SVC2:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC2 local
0000020061C06FCA ITSO_SVC4 remote fully_configured 100

9.15.3 Changing link tolerance and system delay simulation
The -gmlinktolerance parameter defines the sensitivity of the SVC to overload conditions on
the intercluster link. The value is the number of seconds of continuous link difficulty that is
tolerated before the SVC stops the remote copy relationships to prevent the link problems from
affecting host I/O at the primary site. To change the value, use the following command:
chsystem -gmlinktolerance link_tolerance

The link_tolerance value is 60 - 86400 seconds in increments of 10 seconds. The default


value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.

Important: We strongly suggest that you use the default value. If the link is overloaded for
a period (which affects host I/O at the primary site), the relationships are stopped to protect
those hosts.
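For example, to return the link tolerance to its default value of 300 seconds after testing with a different setting, you can run a command similar to the following (a sketch; adjust the value to your environment):

chsystem -gmlinktolerance 300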

Intercluster and intracluster delay simulation


This Global Mirror feature permits the simulation of a delayed write to a remote volume. It
allows you to test how an application behaves when writes to the secondary volume are delayed
(for example, to detect colliding writes) before Global Mirror is fully deployed. The delay
simulation can be enabled separately for intracluster or intercluster Global Mirror
relationships. To enable this feature, run the following commands:
򐂰 For intercluster simulation, run this command:
chsystem -gminterdelaysimulation <inter_cluster_delay_simulation>
򐂰 For intracluster simulation, run this command:
chsystem -gmintradelaysimulation <intra_cluster_delay_simulation>

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express


the amount of time (in milliseconds) that secondary I/Os are delayed for intercluster and
intracluster relationships. You can set a value of 0 - 100 milliseconds in 1-millisecond
increments for the inter_cluster_delay_simulation value or the
intra_cluster_delay_simulation value in the previous commands. A value of 0 disables the
feature.

To check the current settings for the delay simulation, use the following command:
lssystem

In Example 9-171, we modify the delay simulation values and the Global Mirror link tolerance
parameter, and then display the changed settings.

Example 9-171 Delay simulation and link tolerance modifications


IBM_2145:ITSO_SVC2:superuser>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC2:superuser>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC2:superuser>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC2:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC2
location local
partnership
bandwidth
total_mdisk_capacity 866.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 30.00GB

total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance

9.15.4 Creating a Global Mirror Consistency Group


In Example 9-172, we create the Global Mirror Consistency Group by using the
mkrcconsistgrp command. We use this Consistency Group for the Global Mirror relationships
for the database volumes. The Consistency Group is named CG_W2K3_GM.

Example 9-172 Creating the Global Mirror Consistency Group CG_W2K3_GM


IBM_2145:ITSO_SVC2:superuser>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp

id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_GM 000002006BE04FC4 ITSO_SVC2 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none

9.15.5 Creating Global Mirror relationships


In Example 9-173, we create the GMREL1 and GMREL2 Global Mirror relationships for the
GM_DB_Pri and GM_DBLog_Pri volumes. We also make them members of the CG_W2K3_GM
Global Mirror Consistency Group.

We use the lsvdisk command to list all of the volumes in the ITSO_SVC2 system. Then, we
use the lsrcrelationshipcandidate command to show the possible candidate volumes for
GM_DB_Pri in ITSO_SVC4.

After checking all of these conditions, we use the mkrcrelationship command to create the
Global Mirror relationship. To verify the new Global Mirror relationships, we list them by using
the lsrcrelationship command.

Example 9-173 Creating GMREL1 and GMREL2 Global Mirror relationships


IBM_2145:ITSO_SVC2:superuser>lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 GM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no
1 GM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 GM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no

IBM_2145:ITSO_SVC2:superuser>lsrcrelationshipcandidate -aux ITSO_SVC4 -master GM_DB_Pri


id vdisk_name
0 GM_DB_Sec
1 GM_DBLog_Sec
2 GM_App_Sec

IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4


-consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [0], successfully created

IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4


-consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [1], successfully created

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode

0 GMREL1 000002006BE04FC4 ITSO_SVC2 0 GM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 GM_DB_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
1 GMREL2 000002006BE04FC4 ITSO_SVC2 1 GM_DBLog_Pri 0000020061C06FCA
ITSO_SVC4 1 GM_DBLog_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none

9.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri


In Example 9-174, we create the stand-alone Global Mirror relationship GMREL3 for
GM_App_Pri. After the stand-alone Global Mirror relationship GMREL3 is created, we check the
status of each of our Global Mirror relationships.

The status of GMREL3 is consistent_stopped because it was created with the -sync option. The
-sync option indicates that the secondary (auxiliary) volume is synchronized with the primary
(master) volume. The initial background synchronization is skipped when this option is used.

GMREL1 and GMREL2 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.

Example 9-174 Creating a stand-alone Global Mirror relationship and verifying it


IBM_2145:ITSO_SVC2:superuser>mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -sync
-name GMREL3 -global
RC Relationship, id [2], successfully created

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC2:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_GM:
inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC2:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG_W2
K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC2:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consisten
t_stopped:50:100:global:none

9.15.7 Starting Global Mirror


Now that the Global Mirror Consistency Group and relationships are created, we are ready to
use the Global Mirror relationships in our environment.

When Global Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy if a hardware failure occurs that affects the SAN at the
production site.

In this section, we show how to start the stand-alone Global Mirror relationships and the
Consistency Group.

9.15.8 Starting a stand-alone Global Mirror relationship
In Example 9-175, we start the stand-alone Global Mirror relationship that is named GMREL3.
Because the Global Mirror relationship was in the Consistent stopped state and no updates
were made to the primary volume, the relationship quickly enters the Consistent synchronized
state.

Example 9-175 Starting the stand-alone Global Mirror relationship


IBM_2145:ITSO_SVC2:superuser>startrcrelationship GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.15.9 Starting a Global Mirror Consistency Group


In Example 9-176, we start the CG_W2K3_GM Global Mirror Consistency Group. Because the
Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy completes for all of the relationships that are in the
Consistency Group.

Upon the completion of the background copy, the CG_W2K3_GM Global Mirror Consistency
Group enters the Consistent synchronized state.

Example 9-176 Starting the Global Mirror Consistency Group


IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp 0
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master

state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.10 Monitoring the background copy progress


To monitor the background copy progress, use the lsrcrelationship command. This
command shows us all of the defined Global Mirror relationships if it is used without any
parameters. In the command output, progress indicates the current background copy
progress. Example 9-177 shows our Global Mirror relationships.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror Consistency Groups or relationships change state.

Example 9-177 Example of monitoring the background copy progress


IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2

master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all of the Global Mirror relationships complete the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 9-178.

Example 9-178 Listing the Global Mirror Consistency Group


IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.11 Stopping and restarting Global Mirror


Now that the Global Mirror Consistency Group and relationships are running, we describe
how to stop, restart, and change the direction of the stand-alone Global Mirror relationships
and the Consistency Group.

First, we show how to stop and restart the stand-alone Global Mirror relationships and the
Consistency Group.

9.15.12 Stopping a stand-alone Global Mirror relationship
In Example 9-179, we stop the stand-alone Global Mirror relationship while we enable access
(write I/O) to the primary and the secondary volume. As a result, the relationship enters the
Idling state.

Example 9-179 Stopping the stand-alone Global Mirror relationship


IBM_2145:ITSO_SVC2:superuser>stoprcrelationship -access GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.15.13 Stopping a Global Mirror Consistency Group


In Example 9-180, we stop the Global Mirror Consistency Group without specifying the
-access parameter. Therefore, the Consistency Group enters the Consistent stopped state.

Example 9-180 Stopping a Global Mirror Consistency Group without specifying the -access parameter
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global

cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

If we want to enable access (write I/O) for the secondary volume later, we can reissue the
stoprcconsistgrp command and specify the -access parameter. The Consistency Group
changes to the Idling state, as shown in Example 9-181.

Example 9-181 Stopping a Global Mirror Consistency Group


IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.14 Restarting a Global Mirror relationship in the Idling state


When a Global Mirror relationship is restarted in the Idling state, we must specify the copy
direction.

If any updates were performed on the master volume or the auxiliary volume, consistency is
compromised. Therefore, we must issue the -force parameter to restart the relationship, as
shown in Example 9-182. If the -force parameter is not used, the command fails.

Example 9-182 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC2:superuser>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master

consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.15.15 Restarting a Global Mirror Consistency Group in the Idling state


When a Global Mirror Consistency Group is restarted in the Idling state, we must specify the
copy direction.

If any updates were performed on the master volume or the auxiliary volume in any of the
Global Mirror relationships in the Consistency Group, consistency is compromised. Therefore,
we must issue the -force parameter to start the relationship. If the -force parameter is not
used, the command fails.

In Example 9-183, we restart the Consistency Group and change the copy direction by
specifying the auxiliary volumes to become the primaries.

Example 9-183 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.16 Changing the direction for Global Mirror


In this section, we show how to change the copy direction of the stand-alone Global Mirror
relationships and the Consistency Group.

9.15.17 Switching the copy direction for a Global Mirror relationship
When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command and
specifying the primary volume.

If the volume that is specified as the primary already is a primary when this command is run,
the command has no effect.

In Example 9-184, we change the copy direction for the stand-alone Global Mirror relationship
and specify the auxiliary volume to become the primary.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited to that
volume when it becomes the secondary. Therefore, careful planning is required before the
switchrcrelationship command is used.

Example 9-184 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC2:superuser>switchrcrelationship -primary aux GMREL3


IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2

aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

9.15.18 Switching the copy direction for a Global Mirror Consistency Group
When a Global Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the relationship by using the switchrcconsistgrp command
and specifying the primary volume. If the volume that is specified as the primary already is a
primary when this command is run, the command has no effect.

In Example 9-185, we change the copy direction for the Global Mirror Consistency Group and
specify the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O to that volume is
inhibited when it becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.

Example 9-185 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC2:superuser>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM

id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.19 Changing a Global Mirror relationship to the cycling mode


Starting with SVC 6.3, Global Mirror can operate with or without cycling. When operating
without cycling, write operations are applied to the secondary volume as soon as possible
after they are applied to the primary volume. The secondary volume often is less than
1 second behind the primary volume, which minimizes the amount of data that must be
recovered in a failover. However, this capability requires that a high-bandwidth link is
provisioned between the two sites.

When Global Mirror operates in cycling mode, changes are tracked and, where needed,
copied to intermediate Change Volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered if a failover occurs. However, lower bandwidth is required to provide
an effective solution because the data transfer can be smoothed over a longer period.

A Global Mirror relationship consists of two volumes: primary and secondary. With SVC 6.3,
each of these volumes can be associated to a Change Volume. Change Volumes are used to
record changes to the remote copy volume. A FlashCopy relationship exists between the
remote copy volume and the Change Volume. This relationship cannot be manipulated as a
normal FlashCopy relationship. Most commands fail by design because this relationship is an
internal relationship.

Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and
it is enabled by using svctask chrcrelationship -cyclingmode multi.

The primary Change Volume stores changes to be sent to the secondary volume and the
secondary Change Volume is used to maintain a consistent image at the secondary volume.
Every x seconds, the primary FlashCopy mapping is started automatically, where x is the
cycling period and is configurable. Data is then copied to the secondary volume from the
primary Change Volume. The secondary FlashCopy mapping is started if resynchronization is
needed. Therefore, a consistent copy is always at the secondary volume. The cycling period
is configurable and the default value is 300 seconds.
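For example, if the link cannot complete a cycle within the default 300 seconds, you might lengthen the cycling period on a relationship with a command similar to the following (a sketch; the GMREL3 name and the 600-second value are illustrative):

chrcrelationship -cycleperiodseconds 600 GMREL3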

The recovery point objective (RPO) depends on how long the FlashCopy takes to complete. If
the FlashCopy completes within the cycling time, the maximum RPO = 2 x the cycling time;
otherwise, the RPO = 2 x the copy completion time.
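For example, with the default cycling period of 300 seconds, the maximum RPO is approximately 600 seconds while each FlashCopy cycle completes on time; if a cycle takes 450 seconds to complete, the maximum RPO grows to approximately 900 seconds.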

You can estimate the current RPO by using the new freeze_time rcrelationship property. It is
the time of the last consistent image that is present at the secondary. Figure 9-13 shows the
cycling mode with Change Volumes.

Figure 9-13 Global Mirror with Change Volumes

Change Volume requirements


Adhere to the following rules for the Change Volume:
򐂰 The Change Volume can be a thin-provisioned volume.
򐂰 It must be the same size as the primary and secondary volumes.
򐂰 The Change Volume must be in the same I/O Group as the primary and secondary
volumes.
򐂰 It cannot be used for the user’s remote copy or FlashCopy mappings.
򐂰 You must have a Change Volume for both the primary and secondary volumes.
򐂰 You cannot manipulate it like a normal FlashCopy mapping.

In this section, we show how to change the cycling mode of the stand-alone Global Mirror
relationship (GMREL3) and the Consistency Group CG_W2K3_GM Global Mirror relationships
(GMREL1 and GMREL2).

We assume that the source and target volumes were created and that the ISLs and zoning
are in place to enable the SVC systems to communicate. We also assume that the Global
Mirror relationship was established.

Complete the following steps to change the Global Mirror to cycling mode with Change
Volumes:
1. Create thin-provisioned Change Volumes for the primary and secondary volumes at both
sites.
2. Stop the stand-alone relationship GMREL3 to change the cycling mode at the primary site.
3. Set the cycling mode on the stand-alone relationship GMREL3 at the primary site.
4. Set the Change Volume on the master volume relationship GMREL3 at the primary site.
5. Set the Change Volume on the auxiliary volume relationship GMREL3 at the secondary site.
6. Start the stand-alone relationship GMREL3 in cycling mode at the primary site.
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode at the primary site.
8. Set the cycling mode on the Consistency Group at the primary site.

9. Set the Change Volume on the master volume relationship GMREL1 of the Consistency
Group CG_W2K3_GM at the primary site.
10.Set the Change Volume on the auxiliary volume relationship GMREL1 at the secondary site.
11.Set the Change Volume on the master volume relationship GMREL2 of the Consistency
Group CG_W2K3_GM at the primary site.
12.Set the Change Volume on the auxiliary volume relationship GMREL2 at the secondary site.
13.Start the Consistency Group CG_W2K3_GM in the cycling mode at the primary site.

9.15.20 Creating the thin-provisioned Change Volumes


We start the setup process by creating thin-provisioned Change Volumes for the primary and
secondary volumes at both sites, as shown in Example 9-186.

Example 9-186 Creating the thin-provisioned volumes for Global Mirror cycling mode
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC2:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created

IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize


20% -autoexpand -grainsize 32 -name GM_DB_Sec_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_DBLog_Sec_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_App_Sec_CHANGE_VOL
Virtual Disk, id [5], successfully created

9.15.21 Stopping the stand-alone remote copy relationship


We now display the remote copy relationships to ensure that they are in sync. Then, we stop
the stand-alone relationship GMREL3, as shown in Example 9-187.

Example 9-187 Stopping the remote copy stand-alone relationship


IBM_2145:ITSO_SVC2:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name
aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary
consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC2 0 GM_DB_Pri
0000020061C06FCA ITSO_SVC4 0 GM_DB_Sec aux 0
CG_W2K3_GM consistent_synchronized 50 global
none
1 GMREL2 000002006BE04FC4 ITSO_SVC2 1 GM_DBLog_Pri
0000020061C06FCA ITSO_SVC4 1 GM_DBLog_Sec aux 0

CG_W2K3_GM consistent_synchronized 50 global
none
2 GMREL3 000002006BE04FC4 ITSO_SVC2 2 GM_App_Pri
0000020061C06FCA ITSO_SVC4 2 GM_App_Sec aux
consistent_synchronized 50 global none

IBM_2145:ITSO_SVC2:superuser>stoprcrelationship GMREL3

9.15.22 Setting the cycling mode on the stand-alone remote copy relationship
In Example 9-188, we set the cycling mode on the relationship by using the
chrcrelationship command. The -cyclingmode and -masterchange parameters cannot be
specified in the same command.

Example 9-188 Setting the cycling mode


IBM_2145:ITSO_SVC2:superuser>chrcrelationship -cyclingmode multi GMREL3

9.15.23 Setting the Change Volume on the master volume


In Example 9-189, we set the Change Volume for the primary volume. A display shows the
name of the master Change Volume.

Example 9-189 Setting the Change Volume


IBM_2145:ITSO_SVC2:superuser>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name

9.15.24 Setting the Change Volume on the auxiliary volume
In Example 9-190, we set the Change Volume on the auxiliary volume in the secondary site.
From the display, we can see the name of the volume.

Example 9-190 Setting the Change Volume on the auxiliary volume


IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL 2
IBM_2145:ITSO_SVC4:superuser>
IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

9.15.25 Starting the stand-alone relationship in the cycling mode


In Example 9-191, we start the stand-alone relationship GMREL3. After a few minutes, we
check the freeze_time parameter to see how it changes.

Example 9-191 Starting the stand-alone relationship in the cycling mode


IBM_2145:ITSO_SVC2:superuser>startrcrelationship GMREL3

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id

consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

9.15.26 Stopping the Consistency Group to change the cycling mode


In Example 9-192, we stop the Consistency Group with two relationships. You must stop it to
change Global Mirror to cycling mode. A display shows that the state of the Consistency
Group changes to consistent_stopped.

Example 9-192 Stopping the Consistency Group to change the cycling mode
IBM_2145:ITSO_SVC2:superuser>stoprcconsistgrp CG_W2K3_GM

IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA

aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.27 Setting the cycling mode on the Consistency Group


In Example 9-193, we change the cycling mode of the Consistency Group CG_W2K3_GM. To
change the cycling mode of the Consistency Group, we must stop the Consistency Group;
otherwise, the command fails.

Example 9-193 Setting the Global Mirror cycling mode on the Consistency Group
IBM_2145:ITSO_SVC2:superuser>chrcconsistgrp -cyclingmode multi CG_W2K3_GM

IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.15.28 Setting the Change Volume on the master volume relationships of the
Consistency Group
In Example 9-194 on page 629, we change both of the relationships of the Consistency
Group to add the Change Volumes on the primary volumes. A display shows the name of the
master Change Volumes.

Example 9-194 Setting the Change Volume on the master volume
IBM_2145:ITSO_SVC2:superuser>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL
GMREL1

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC2:superuser>

IBM_2145:ITSO_SVC2:superuser>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2

IBM_2145:ITSO_SVC2:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL

aux_change_vdisk_id
aux_change_vdisk_name

9.15.29 Setting the Change Volumes on the auxiliary volumes


In Example 9-195, we change both of the relationships of the Consistency Group to add the
Change Volumes to the secondary volumes. The display shows the names of the auxiliary
Change Volumes.

Example 9-195 Setting the Change Volumes on the auxiliary volumes


IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_DB_Sec_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL

IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_DBLog_Sec_CHANGE_VOL GMREL2


IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time

status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL

9.15.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode


In Example 9-196, we start the Consistency Group in the cycling mode. Looking at the field
freeze_time, you can see that the Consistency Group was started in the cycling mode and
that it is taking consistency images.

Example 9-196 Starting the Consistency Group with cycling mode


IBM_2145:ITSO_SVC2:superuser>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

IBM_2145:ITSO_SVC2:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC2
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi

RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

9.16 Service and maintenance


In this section, we describe the various service and maintenance tasks that you can run
within the SVC environment.

9.16.1 Upgrading software


In this section, we describe how to upgrade the SVC software.

Package numbering and version


The format for software upgrade package numbers is four positive integers that are separated
by periods, for example, 7.4.0.0. Each software package is assigned a unique number.

Important: The support for migration from 6.3.x.x to 7.4.x.x is limited. Check with your
service representative for the recommended steps.

For more information about the recommended software levels, see this website:
http://www.ibm.com/systems/storage/software/virtualization/svc/index.html

SAN Volume Controller software upgrade test utility


The SVC Software Upgrade Test Utility, which is on the Master Console, checks the software
levels in the system against the recommended levels, which are documented on the support
website. You are informed if the software levels are current or if you must download and install
newer levels. You can download the utility and installation instructions from this website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file is uploaded to the system (to the /home/superuser/upgrade directory),
you can select the software and apply it to the system by using the management GUI or the
applysoftware command. When a new code level is applied, it is automatically installed on all
of the nodes within the system.

The underlying command-line tool runs the sw_preinstall script. This script checks the
validity of the upgrade file and whether it can be applied over the current level. If the upgrade
file is unsuitable, the sw_preinstall script deletes the files to prevent the buildup of invalid
files on the system.

Precaution before you perform the upgrade
Software installation often is considered to be a client’s task. The SVC supports concurrent
software upgrade. You can perform the software upgrade concurrently with I/O user
operations and certain management activities. However, only limited CLI commands are
operational from the time that the installation command starts until the upgrade operation
ends successfully or is backed out. Certain commands fail with a message that indicates that
a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and the SAN
are working; otherwise, the applications might experience I/O failures during the software
upgrade. You can check the I/O paths by using the Subsystem Device Driver (SDD) datapath
query commands. Example 9-197 shows the output.

Example 9-197 Query adapter


#datapath query adapter
Active Adapters :2

Adpt# Name State Mode Select Errors Paths Active


0 fscsi0 NORMAL ACTIVE 1445 0 4 4
1 fscsi1 NORMAL ACTIVE 1888 0 4 4

#datapath query device


Total Devices : 2

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi1/hdisk7 OPEN NORMAL 972 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized


SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk4 OPEN NORMAL 784 0
1 fscsi1/hdisk8 OPEN NORMAL 0 0

Write-through mode: During a software upgrade, periods occur when not all of the nodes
in the system are operational. As a result, the cache operates in write-through mode.
Write-through mode affects the throughput, latency, and bandwidth aspects of
performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your system is running without problems). Specifically, ensure that the following conditions
are true:
򐂰 Your uninterruptible power supply units are all getting their power from an external source
and that they are not daisy chained. Ensure that each uninterruptible power supply unit is
not supplying power to another node’s uninterruptible power supply unit.
򐂰 The power cable and the serial cable, which come from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, another node might also be shut down mistakenly
during the upgrade while one node is shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.

You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.

Upgrade procedure
To upgrade the SVC system software, complete the following steps:
1. Before the upgrade is started, you must back up the configuration (9.17, “Backing up the
SAN Volume Controller system configuration” on page 647) and save the backup
configuration file in a safe place.
2. Before you start to transfer the software code to the clustered system, clear the previously
uploaded upgrade files in the /home/superuser/upgrade SVC system directory, as shown
in Example 9-198.

Example 9-198 cleardumps -prefix /home/superuser/upgrade command


IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /home/superuser/upgrade
IBM_2145:ITSO_SVC2:superuser>

3. Save the data collection for support diagnosis if you experience problems, as shown in
Example 9-199.

Example 9-199 svc_snap -c command


IBM_2145:ITSO_SVC2:superuser>svc_snap -c
Collecting system information...
Creating Config Backup
Dumping error log...
Creating
Snap data collected in /dumps/snap.110711.111003.111031.tgz

4. List the dump that was generated by the previous command, as shown in Example 9-200.

Example 9-200 lsdumps command


IBM_2145:ITSO_SVC2:superuser>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711

12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz

5. Save the generated dump in a safe place by using the pscp command, as shown in
Example 9-201.

Example 9-201 pscp -load command


C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 superuser@10.18.228.173:/dumps/snap.110711.111003.111031.tgz c:snap.110711.111003.111031.tgz
snap.110711.111003.111031 | 4999 kB | 4999.8 kB/s | ETA: 00:00:00 | 100%

Note: The pscp command does not work if you did not upload your PuTTY SSH private
key or if you are not using the user ID and password with the PuTTY pageant agent, as
shown in Figure 9-14.

Figure 9-14 Pageant example

6. Upload the new software package by using PuTTY Secure Copy. Enter the pscp -load
command, as shown in Example 9-202.

Example 9-202 pscp -load command


C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 c:\IBM2145_INSTALL_7.4.0.0.110926.tgz.gpg superuser@10.18.228.81:/home/superuser/upgrade
110926.tgz.gpg | 353712 kB | 11053.5 kB/s | ETA: 00:00:00 | 100%

7. Upload the SVC Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the
command, as shown in Example 9-203.

Example 9-203 Upload utility


C:\>pscp -load ITSO_SVC2 IBM2145_INSTALL_upgradetest_12.31
superuser@10.18.229.81:/home/superuser/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

8. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the lsdumps command, as shown in Example 9-204.

Example 9-204 lsdumps command


IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /home/superuser/upgrade
id filename
0 IBM2145_INSTALL_7.4.0.0
1 IBM2145_INSTALL_upgradetest_12.31

9. Now that the packages are uploaded, install the SVC Software Upgrade Test Utility, as
shown in Example 9-205.

Example 9-205 applysoftware command

IBM_2145:ITSO_SVC2:superuser>applysoftware -file
IBM2145_INSTALL_upgradetest_12.31
CMMVC6227I The package installed successfully.

10.Using the svcupgradetest command, test the upgrade for known issues that might prevent
a software upgrade from completing successfully, as shown in Example 9-206.

Example 9-206 svcupgradetest command


IBM_2145:ITSO_SVC2:superuser>svcupgradetest -v 7.4.0.0
svcupgradetest version 12.31 Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the


errors by using the maintenance procedures before you continue.

11.Use the applysoftware command to apply the software upgrade, as shown in


Example 9-207.

Example 9-207 Applysoftware upgrade command example


IBM_2145:ITSO_SVC2:superuser>applysoftware -file IBM2145_INSTALL_7.4.0.0

While the upgrade runs, you can check the status, as shown in Example 9-208.

Example 9-208 Checking the update status


IBM_2145:ITSO_SVC2:superuser>lsupdate
status system_updating
event_sequence_number
progress 50
estimated_completion_time 140522093020
suggested_action wait
system_new_code_level 7.4.0.1 (build 99.2.141022001)
system_forced no
system_next_node_status updating
system_next_node_time
system_next_node_id 2

system_next_node_name node2

The new code is distributed and applied to each node in the SVC system:
– During the upgrade, you can issue the lsupdate command to see the status of the
upgrade.
– To verify that the upgrade was successful, you can run the lssystem and lsnodevpd
commands, as shown in Example 9-209. (We truncated the lssystem and lsnodevpd
information for this example.)

Example 9-209 lssystem and lsnodevpd commands


IBM_2145:ITSO_SVC2:superuser>lssystem
id 000002007F600A10
...
...
name ITSO_SVC2
location local
partnership
total_mdisk_capacity 825.0GB
space_in_mdisk_grps 571.0GB
space_allocated_to_vdisks 75.05GB
total_free_space 750.0GB
total_vdiskcopy_capacity 85.00GB
total_used_capacity 75.00GB
total_overallocation 10
total_vdisk_capacity 75.00GB
total_allocated_extent_capacity 81.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.4.0.0 (build 103.11.1410200000)

IBM_2145:ITSO_SVC2:superuser>lsnodevpd 1
id 1
...
...
system code level: 4 fields
id 1
node_name ITSO_SVCN1
WWNN 0x500507680c000508
code_level 7.4.0.0 (build 103.11.1410200000)

The required tasks to upgrade the SVC software are complete.

9.16.2 Running the maintenance procedures


Use the finderr command to generate a list of any unfixed errors in the system. This
command analyzes the last generated log that is in the /dumps/elogs/ directory on the
system.

To generate a new log before you analyze unfixed errors, run the dumperrlog command, as
shown in Example 9-210 on page 638.

Example 9-210 dumperrlog command
IBM_2145:ITSO_SVC2:superuser>dumperrlog

This command generates an errlog_timestamp file, such as errlog_110711_111003_090500:


򐂰 errlog is part of the default prefix for all event log files.
򐂰 110711 is the panel name of the current configuration node.
򐂰 111003 is the date (YYMMDD).
򐂰 090500 is the time (HHMMSS).

You can add the -prefix parameter to your command to change the default prefix of errlog
to something else, as shown in Example 9-211.

Example 9-211 dumperrlog -prefix command


IBM_2145:ITSO_SVC2:superuser>dumperrlog -prefix ITSO_SVC2_errlog

This command creates a file that is called ITSO_SVC2_errlog_110711_111003_141111.

To see the file name, enter the command that is shown in Example 9-212.

Example 9-212 lsdumps -prefix command


IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC2_errlog_110711_111003_141111

Maximum number of event log dump files: A maximum of 10 event log dump files per
node are kept on the system. When the 11th dump is made, the oldest existing dump file
for that node is overwritten. The directory might also hold log files that are retrieved from
other nodes. These files are not counted.

The SVC deletes the oldest file (when necessary) for this node to maintain the maximum
number of files. The SVC does not delete files from other nodes unless you issue the
cleardumps command.
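For example, to clear old event log dump files from a specific node, you can run a command similar to the following (a sketch; node2 is a hypothetical node name in this context):

cleardumps -prefix /dumps/elogs node2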

After you generate your event log, you can issue the finderr command to scan the event log
for any unfixed events, as shown in Example 9-213.

Example 9-213 finderr command


IBM_2145:ITSO_SVC2:superuser>finderr
Highest priority unfixed error code is [1550]

As you can see, one unfixed error exists on our system. To analyze this event in more detail,
we review the event log on our personal computer. We use PuTTY Secure Copy to copy the event
log file from the system to our local management workstation, as shown in Example 9-214.

Example 9-214 Using the pscp command to copy event logs off from the SVC
In W2K3 → Start → Run → cmd

C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC2 superuser@10.18.228.81:/dumps/elogs/ITSO_SVC2_errlog_110711_111003_141111 c:\ITSO_SVC2_errlog_110711_111003_141111

ITSO_SVC2_errlog_110711_1 | 6 kB | 6.8 kB/s | ETA: 00:00:00 | 100%

C:\Program Files (x86)\PuTTY>

To use the Run option, you must know the location of your pscp.exe file. In our case, it is in
the C:\Program Files (x86)\PuTTY folder.

This command copies the file that is called ITSO_SVC2_errlog_110711_111003_141111 to the
C:\ directory on our local workstation under the same name. Open the file in WordPad.
(Notepad does not format the contents correctly.) You see information that is similar to the
information that is shown in Example 9-215. (We truncated this list for this example.)

Example 9-215 errlog in WordPad


//-------------------
// Error Log Entries
//-------------------

Error Log Entry 0


Node Identifier : SVC1N2
Object Type : node
Object ID : 2
Copy ID :
Sequence Number : 101
Root Sequence Number : 101
First Error Timestamp: Mon Oct 3 10:50:13 2011
: Epoch + 1317664213
Last Error Timestamp : Mon Oct 3 10:50:13 2011
: Epoch + 1317664213
Error Count : 1
Error ID : 980221 : Error log cleared
Error Code :
Status Flag : SNMP trap raised
Type Flag : INFORMATION

00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

By scrolling through the list or searching for the term “unfixed”, you can find more information
about the problem. You might see more entries in the error log that have the status of unfixed.

After fixing the problem, you can mark the event as fixed in the log by issuing the cherrstate
command against its sequence number, as shown in Example 9-216.

Example 9-216 cherrstate command


IBM_2145:ITSO_SVC2:superuser>cherrstate -sequencenumber 106

If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by
entering the same command and appending the -unfix flag to the end, as shown in
Example 9-217.

Example 9-217 Using the -unfix flag


IBM_2145:ITSO_SVC2:superuser>cherrstate -sequencenumber 106 -unfix

9.16.3 Setting up SNMP notification


To set up event notification, use the mksnmpserver command. Example 9-218 shows an
example of the mksnmpserver command.

Example 9-218 mksnmpserver command


IBM_2145:ITSO_SVC2:superuser>mksnmpserver -error on -warning on -info on -ip
9.43.86.160 -community SVC
SNMP Server id [0] successfully created

This command sends all errors, warnings, and informational events to the SVC community on
the SNMP manager with the IP address 9.43.86.160.

9.16.4 Setting the syslog event notification


The SVC supports syslog notification in addition to email and SNMP traps, so you can send
event messages to a defined syslog server.

The syslog protocol is a client/server standard for forwarding log messages from a sender to
a receiver on an IP network. You can use the syslog to integrate log messages from various
types of systems into a central repository. You can configure the SVC to send information to
up to six syslog servers.

You use the mksyslogserver command to configure the SVC by using the CLI, as shown in
Example 9-219.

Example 9-219 Configuring the syslog


IBM_2145:ITSO_SVC2:superuser>mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [0] successfully created

The use of this command with the -h parameter gives you information about all of the
available options. In our example, we configure the SVC to use only the default values for our
syslog server.
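For reference, a fuller invocation might resemble the following sketch. The -facility, -error,
-warning, and -info parameters are assumed here based on the fields that lssyslogserver
reports; verify the exact syntax with mksyslogserver -h on your code level. The IP address and
server name are hypothetical:

IBM_2145:ITSO_SVC2:superuser>mksyslogserver -ip 10.64.210.232 -name Syslogserv2 -facility 0 -error on -warning on -info off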

When we configure our syslog server, we can display the current syslog server configurations
in our system, as shown in Example 9-220.

Example 9-220 lssyslogserver command


IBM_2145:ITSO_SVC2:superuser>lssyslogserver
id name IP_address facility error warning info
0 Syslogserv1 10.64.210.231 0 on on on
1 Syslogserv1 10.64.210.231 0 on on on

9.16.5 Configuring error notification by using an email server
The SVC can use an email server to send event notification and inventory emails to email
users. It can transmit any combination of error, warning, and informational notification types.
The SVC supports up to six email servers to provide redundant access to the external email
network. The SVC uses the email servers in sequence until the email is successfully sent
from the SVC.

Important: Before the SVC can start sending emails, we must run the startemail
command, which enables this service.

The attempt is successful when the SVC receives a positive acknowledgment from an email
server that the email was received by the server.

If no port is specified, port 25 is the default port, as shown in Example 9-221.

Example 9-221 The mkemailserver command syntax


IBM_2145:ITSO_SVC2:superuser>mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO_SVC2:superuser>lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure email users that receive email notifications from the SVC system. We can
define up to 12 users to receive emails from our SVC.

By using the lsemailuser command, we can verify which user is registered and what type of
information is sent to that user, as shown in Example 9-222.

Example 9-222 lsemailuser command


IBM_2145:ITSO_SVC2:superuser>lsemailuser
id name address user_type error warning info inventory
0 IBM_Support_Center callhome0@de.ibm.com support on off off on

We can also create a user for a SAN superuser, as shown in Example 9-223.

Example 9-223 mkemailuser command


IBM_2145:ITSO_SVC2:superuser>mkemailuser -address SANsuperuser@ibm.com -error on
-warning on -info on -inventory on
User, id [0], successfully created
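With at least one email server and one email user defined, remember to enable the
notification service itself, as noted in the Important box earlier in this section. A minimal
sketch follows:

IBM_2145:ITSO_SVC2:superuser>startemail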

9.16.6 Analyzing the event log


In this section, we describe the types of events that are logged in the event log. An event is an
occurrence of significance to a task or system. Events can include the completion or failure of
an operation, a user action, or a change in the state of a process.

Node event codes now have the following classifications:
• Critical events: Critical events put the node into the service state and prevent the node
from joining the system. The critical events are numbered 500 - 699.

Deleting a node: Deleting a node from a system causes the node to enter the service
state, as well.

• Non-critical events: Non-critical events are partial hardware faults, for example, one
power-supply unit (PSU) failed in the 2145-CF8. The non-critical events are numbered
800 - 899.

To display the event log, use the lseventlog command, as shown in Example 9-224.

Example 9-224 lseventlog command


IBM_2145:ITSO_SVC2:superuser>lseventlog -count 2
sequence_number last_timestamp object_type object_id object_name copy_id status fixed
event_id error_code description
102 111003105018 cluster ITSO_SVC2 message no
981004 FC discovery occurred, no configuration changes were detected
103 111003111036 cluster ITSO_SVC2 message no
981004 FC discovery occurred, no configuration changes were detected

IBM_2145:ITSO_SVC2:superuser>lseventlog 103
sequence_number 103
first_timestamp 111003111036
first_timestamp_epoch 1317665436
last_timestamp 111003111036
last_timestamp_epoch 1317665436
object_type cluster
object_id
object_name ITSO_SVC2
copy_id
reporting_node_id 1
reporting_node_name SVC2N1
root_sequence_number
event_count 1
status message
fixed no
auto_fixed no
notification_type informational
event_id 981004
event_id_text FC discovery occurred, no configuration changes were detected
error_code
error_code_text
sense1 01 01 00 00 7E 0B 00 00 04 02 00 00 01 00 01 00
sense2 00 00 00 00 10 00 00 00 08 00 08 00 00 00 00 00
sense3 00 00 00 00 00 00 00 00 F2 FF 01 00 00 00 00 00
sense4 0E 00 00 00 FC FF FF FF 03 00 00 00 07 00 00 00
sense5 00 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00
sense6 00 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

By using these commands, you can view the last events (you can specify the -count
parameter to define how many events to display) that were generated. Use the method that is
described in 9.16.2, “Running the maintenance procedures” on page 637 to upload and
analyze the event log in more detail.
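If the log is long, it can also help to narrow the output before you analyze it. The following
sketch assumes the -fixed and -order filtering parameters, which are available on recent code
levels; confirm the exact options with lseventlog -h on your system:

IBM_2145:ITSO_SVC2:superuser>lseventlog -fixed no -order severity -count 10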

To clear the event log, issue the clearerrlog command, as shown in Example 9-225.

Example 9-225 clearerrlog command


IBM_2145:ITSO_SVC2:superuser>clearerrlog
Do you really want to clear the log? y

The use of the -force flag stops any confirmation requests from appearing. When you run
this command, the command clears all of the entries from the event log. This process
proceeds even if unfixed errors are in the log. This process also clears any status events in
the log.

Important: This command is a destructive command for the event log. Use this command
only when you rebuild the system or when you fixed a major problem that caused many
entries in the event log that you do not want to fix manually.
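If you do decide to clear the log non-interactively, for example while rebuilding a test system,
the sketch is simply the command combined with the -force flag that was described above:

IBM_2145:ITSO_SVC2:superuser>clearerrlog -force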

9.16.7 License settings


To change the licensing feature settings, use the chlicense command.

Before you change the licensing, you can display your current licenses by issuing the
lslicense command, as shown in Example 9-226.

Example 9-226 lslicense command


IBM_2145:ITSO_SVC2:superuser>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 500
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off

The output shows the current license settings for the system, including whether you are
licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features, and the
storage capacity that is licensed for virtualization. Typically, the license settings already
contain entries because the feature options must be set as part of the web-based system
creation process.

For example, consider that you purchased another 5 TB of licensing for the Metro Mirror and
Global Mirror feature in addition to your current 20 TB license. Example 9-227 shows the
command that you enter to set the new total of 25 TB.

Example 9-227 chlicense command


IBM_2145:ITSO_SVC2:superuser>chlicense -remote 25

To disable a feature, add 0 TB as the capacity for that feature.
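For example, the following sketch would remove the Metro Mirror and Global Mirror capacity
license entirely; adjust the parameter to match the feature that you want to disable:

IBM_2145:ITSO_SVC2:superuser>chlicense -remote 0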

To verify that the changes that you made are reflected in your SVC configuration, you can run
the lslicense command, as shown in Example 9-228 on page 644.

Example 9-228 lslicense command: Verifying changes
IBM_2145:ITSO_SVC2:superuser>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 25
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off

9.16.8 Listing dumps


Starting with SVC 6.3, a new command is available to list the dumps that were generated over
a specific period. You can use lsdumps with the -prefix parameter to return a list of dumps in
the appropriate directory. The command produces a list of the files in the specified directory
on the specified node. If no node is specified, the config node is used. If no -prefix is set, the
files in the /dumps directory are listed.

Error or event dump


The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the
event log at the time that the dump was taken. You create an error or event log dump by using
the dumperrlog command. This command dumps the contents of the error or event log to the
/dumps/elogs directory.

If you do not supply a file name prefix, the system uses the default errlog_ file name prefix.
The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the
node front panel name. If the command is used with the -prefix option, the value that is
entered for the -prefix is used instead of errlog.

The lsdumps -prefix command lists all of the dumps in the /dumps/elogs directory, as
shown in Example 9-229.

Example 9-229 lsdumps -prefix /dumps/elogs


IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC2_errlog_110711_111003_141111
3 ITSO_SVC2_errlog_110711_111003_141620
4 errlog_110711_111003_154759

Featurization log dump


The dumps that are contained in the /dumps/feature directory are dumps of the featurization
log. A featurization log dump is created by using the dumpinternallog command. This
command dumps the contents of the featurization log to the /dumps/feature directory to a file
called feature.txt. Only one of these files exists, so when the dumpinternallog command is
run, this file is overwritten.

The lsdumps -prefix /dumps/feature command lists all of the dumps in the /dumps/feature
directory, as shown in Example 9-230 on page 645.

Example 9-230 lsdumps -prefix /dumps/feature command
IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/feature
id filename
0 feature.txt

I/O trace dump


Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The
type of data that is traced depends on the options that are specified by the settrace
command. The collection of the I/O trace data is started by using the starttrace command.
The I/O trace data collection is stopped when the stoptrace command is used. When the
trace is stopped, the data is written to the file.

The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name
and prefix is the value that is entered by the user for the -filename parameter in the
settrace command.

The command to list all of the dumps in the /dumps/iotrace directory is lsdumps -prefix
/dumps/iotrace, as shown in Example 9-231.

Example 9-231 lsdumps -prefix /dumps/iotrace command


IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/iotrace
id iotrace_filename
0 tracedump_104643_080624_172208
1 iotrace_104643_080624_172451

I/O statistics dump


The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O
statistics for the disks on the cluster. You create an I/O statistics dump by using the
startstats command. As part of this command, you can specify a time interval at which you
want the statistics to be written to the file (the default is 15 minutes). When the time interval is
encountered, the I/O statistics that are collected up to that point are written to a file in the
/dumps/iostats directory.

The file names that are used for storing I/O statistics dumps are
Nm_stats_NNNNNN_YYMMDD_HHMMSS, Nv_stats_NNNNNN_YYMMDD_HHMMSS,
Nn_stats_NNNNNN_YYMMDD_HHMMSS, or Nd_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether
the statistics are for MDisks, volumes, nodes, or drives, as shown in Example 9-232. In these
file names, NNNNNN is the node front panel name.

The command to list all of the dumps that are in the /dumps/iostats directory is lsdumps
-prefix /dumps/iostats, as shown in Example 9-232.

Example 9-232 lsdumps -prefix /dumps/iostats command


IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/iostats
id filename
0 Nm_stats_110711_111003_125706
1 Nn_stats_110711_111003_125706
2 Nv_stats_110711_111003_125706
3 Nd_stats_110711_111003_125706
4 Nv_stats_110711_111003_131204
5 Nd_stats_110711_111003_131204
6 Nn_stats_110711_111003_131204
........

Software dump
The lsdumps command lists the contents of the /dumps directory. The general debug
information, software, application dumps, and live dumps are copied into this directory.
Example 9-233 shows the command.

Example 9-233 lsdumps command without a prefix


IBM_2145:ITSO_SVC2:superuser>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz

Other node dumps


The lsdumps commands can accept a node identifier as input (for example, append the node
name to the end of any of the node dump commands). If this identifier is not specified, the list
of files on the current configuration node is displayed. If the node identifier is specified, the list
of files on that node is displayed.
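For example, the following sketch lists the event log dumps that are held on node SVC1N2
instead of on the configuration node (SVC1N2 is a node name that is used elsewhere in this
chapter; substitute your own node name or ID):

IBM_2145:ITSO_SVC2:superuser>lsdumps -prefix /dumps/elogs SVC1N2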

However, files can be copied only from the current configuration node (by using PuTTY
Secure Copy). Therefore, you must run the cpdumps command to copy the files from a
non-configuration node to the current configuration node. You can then copy them to the
management workstation by using PuTTY Secure Copy.

For example, suppose that you discover a dump file and want to copy it to your management
workstation for further analysis. In this case, you must first copy the file to your current
configuration node.

To copy dumps from other nodes to the configuration node, use the cpdumps command.

In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.

Wildcards: The following rules apply to the use of wildcards with the SVC CLI:
• The wildcard character is an asterisk (*).
• The command can contain a maximum of one wildcard.
• When you use a wildcard, you must surround the filter entry with double quotation
marks (" "), as shown in the following example:
cleardumps -prefix "/dumps/elogs/*.txt"
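Following those rules, a sketch that copies only the text-format event logs from node n4 (the
node that is used in Example 9-234) looks like this:

IBM_2145:ITSO_SVC2:superuser>cpdumps -prefix "/dumps/elogs/*.txt" n4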

Example 9-234 on page 647 shows an example of the cpdumps command.

Example 9-234 cpdumps command
IBM_2145:ITSO_SVC2:superuser>cpdumps -prefix /dumps/configs n4

After you copy the configuration dump file from node n4 to your configuration node, you can
use PuTTY Secure Copy to copy the file to your management workstation for further analysis.

To clear the dumps, you can run the cleardumps command. Again, you can append the node
name if you want to clear dumps off a node other than the current configuration node (the
default for the cleardumps command).

The commands in Example 9-235 clear all logs or dumps from the SVC node SVC1N2.

Example 9-235 cleardumps commands


IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/iostats SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/iotrace SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/feature SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/config SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /dumps/elog SVC1N2
IBM_2145:ITSO_SVC2:superuser>cleardumps -prefix /home/superuser/upgrade SVC1N2

9.17 Backing up the SAN Volume Controller system configuration

You can back up your system configuration by using the Backing Up a Cluster Configuration
window or the CLI svcconfig command. In this section, we describe the overall procedure for
backing up your system configuration and the conditions that must be satisfied to perform a
successful backup.

The backup command extracts configuration data from the system and saves it to the
svc.config.backup.xml file in the /tmp directory. This process also produces an
svc.config.backup.sh file. You can study this file to see what other commands were issued
to extract information.

An svc.config.backup.log log is also produced. You can study this log for the details of what
was done and when it was done. This log also includes information about the other
commands that were issued.

Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file.
The system keeps one archive only. We strongly suggest that you immediately move the .XML
file and related KEY files (see the following limitations) off the system for archiving. Then,
delete the files from the /tmp directory by using the svcconfig clear -all command.

We further advise that you change all of the objects that have default names to non-default
names. Otherwise, a warning is produced for objects with default names.

Also, an object with a default name is restored with its original name and an _r appended.
The underscore (_) prefix is reserved for backup and restore command usage. Do not use this
prefix in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool. Instead, this tool supplements a
traditional data backup and restore tool with a way to back up and restore the client’s
configuration.

To provide a complete backup and disaster recovery solution, you must back up user
(non-configuration) data and configuration (non-user) data. After the restoration of the SVC
configuration, you must fully restore user (non-configuration) data to the system’s disks.

9.17.1 Prerequisites
You must meet the following prerequisites:
• All nodes are online.
• No object name can begin with an underscore.
• All objects have non-default names, that is, names that are not assigned by the SVC.

Although we advise that objects have non-default names at the time that the backup is taken,
this prerequisite is not mandatory. Objects with default names are renamed when they are
restored.

Example 9-236 shows an example of the svcconfig backup command.

Example 9-236 svcconfig backup command


IBM_2145:ITSO_SVC2:superuser>svcconfig backup
..................
CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will
not be restored
..................................................................................
.......
CMMVC6155I SVCCONFIG processing completed successfully

As you can see in Example 9-236, we received a CMMVC6130W Cluster ITSO_SVC4 with
inter-cluster partnership fully_configured will not be restored message. This
message indicates that individual systems in a multisystem environment must be backed up
individually.

If recovery is required, run the recovery commands only on the system where the recovery is
required.

Example 9-237 shows the pscp command.

Example 9-237 pscp command


C:\Program Files\PuTTY>pscp -load ITSO_SVC2
superuser@10.18.229.81:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario shows the value of the configuration backup:


1. Use the svcconfig command to create a backup file on the clustered system that contains
details about the current system configuration.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file
from the clustered system or it becomes lost if the system crashes.

3. If a sufficiently severe failure occurs, the system might be lost. Both the configuration data
(for example, the system definitions of hosts, I/O Groups, managed disk groups (MDGs),
and MDisks) and the application data on the virtualized disks are lost.
In this scenario, it is assumed that the application data can be restored from normal client
backup procedures. However, before you can perform this restoration, you must reinstate
the system as it was configured at the time of the failure. Therefore, you restore the same
MDGs, I/O Groups, host definitions, and volumes that existed before the failure. Then, you
can copy the application data back onto these volumes and resume operations.
4. Recover the hosts, SVCs, disk controller systems, disk hardware, and SAN fabric. The
hardware and SAN fabric must physically be the same as the hardware and SAN fabric
that were used before the failure.
5. Reinitialize the clustered system with the configuration node; the other nodes are
recovered when the configuration is restored.
6. Restore your clustered system configuration by using the backup configuration file that
was generated before the failure.
7. Restore the data on your volumes by using your preferred restoration solution or with help
from IBM Support.
8. Resume normal operations.

9.18 Restoring the SAN Volume Controller clustered system configuration


Important: Always consult IBM Support before you restore the SVC clustered system
configuration from the backup. IBM Support can assist you in analyzing the root cause of
why the system configuration was lost.

After the svcconfig restore -execute command is started, consider any prior user data on
the volumes destroyed. The user data must be recovered through your usual application data
backup and restore process.
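For reference only, and strictly under the direction of IBM Support, the restore is typically run
as a two-stage svcconfig operation after the saved svc.config.backup.xml file is copied back
to the /tmp directory of the configuration node. A sketch of the sequence follows; the -prepare
stage checks the backup file against the current system before -execute applies it:

IBM_2145:ITSO_SVC2:superuser>svcconfig restore -prepare
IBM_2145:ITSO_SVC2:superuser>svcconfig restore -execute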

For more information, see the V6.4.0 Command-Line Interface User’s Guide for IBM System
Storage SAN Volume Controller and Storwize V7000, GC27-2287.

For more information about the SVC configuration backup and restore functions, see V6.4.0
Software Installation and Configuration Guide for IBM System Storage SAN Volume
Controller, GC27-2286.

9.18.1 Deleting the configuration backup


In this section, we describe the tasks that you can perform to delete the configuration backup
that is stored in the configuration file directory on the system. Never clear this configuration
without having a backup of your configuration that is stored in a separate, secure place.

When the clear command is used, you erase the files in the /tmp directory. This command
does not clear the running configuration or prevent the system from working. However, the
command clears all of the configuration backup that is stored in the /tmp directory, as shown
in Example 9-238 on page 650.

Example 9-238 svcconfig clear command
IBM_2145:ITSO_SVC2:superuser>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully

9.19 Working with the SAN Volume Controller quorum MDisks


In this section, we show how to list and change the SVC system quorum Managed Disks
(MDisks).

9.19.1 Listing the SAN Volume Controller quorum MDisks


To list SVC system quorum MDisks and view their numbers and status, run the lsquorum
command, as shown in Example 9-239.

Example 9-239 lsquorum command and detail


IBM_2145:ITSO_SVC2:superuser>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 3 mdisk3 2 ITSO-DS3500 no mdisk no
IBM_2145:ITSO_SVC2:superuser>lsquorum 1
quorum_index 1
status online
id 0
name mdisk0
controller_id 2
controller_name ITSO-DS3500
active yes
object_type mdisk
override no

9.19.2 Changing the SAN Volume Controller quorum MDisks


To move one of your SVC quorum MDisks from one MDisk to another or from one storage
subsystem to another, use the chquorum command, as shown in Example 9-240.

Example 9-240 chquorum command


IBM_2145:ITSO_SVC2:superuser>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 DS_3400 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 3 mdisk3 2 ITSO-DS3500 no mdisk no

chquorum -mdisk 9 2

IBM_2145:ITSO_SVC2:superuser>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no

1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 9 mdisk9 3 ITSO-DS5000 no mdisk no

As you can see in Example 9-240 on page 650, the quorum index 2 was moved from MDisk3
on the ITSO-DS3500 controller to MDisk9 on the ITSO-DS5000 controller.

9.20 Working with the Service Assistant menu


You can service a node through an Ethernet connection by using a web browser or the CLI.
The web browser runs a service application that is called the Service Assistant.

9.20.1 SAN Volume Controller CLI Service Assistant menu


The following sets of commands are available for performing service tasks on the system:
• By using the sainfo command set, you can query the various components within the SVC
environment.
• By using the satask command set, you can change the various components within the SVC.

When the command syntax is shown, you see certain parameters in square brackets, for
example, [parameter], which indicate that the parameter is optional in most (if not all)
instances. Any information that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
• sainfo -? shows a complete list of information commands.
• satask -? shows a complete list of task commands.
• sainfo commandname -? shows the syntax of information commands.
• satask commandname -? shows the syntax of task commands.

Example 9-241 shows the two sets of commands with Service Assistant.

Example 9-241 sainfo and satask commands


IBM_2145:ITSO_SVC2:superuser>sainfo -h
The following actions are available with this command :
lscmdstatus
lsfiles
lsservicenodes
lsservicerecommendation
lsservicestatus
IBM_2145:ITSO_SVC2:superuser>satask -h
The following actions are available with this command :
chenclosurevpd
chnodeled
chserviceip
chwwnn
cpfiles
installsoftware
leavecluster
mkcluster
rescuenode
setlocale
setpacedccu
settempsshkey
snap

startservice
stopnode
stopservice
t3recovery
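To see the parameters of an individual command, append -? (or -h) to it, as listed earlier. For
example, the following sketch queries the syntax of the satask snap command (output not
shown here):

IBM_2145:ITSO_SVC2:superuser>satask snap -?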

Important: You must use the sainfo and satask command sets under the direction of IBM
Support. The incorrect use of these commands can lead to unexpected results.

9.21 SAN troubleshooting and data collection


When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the
SAN because the SVC is at the center of the environment through which the communication
travels.

For more information about how to troubleshoot and collect data from the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Use the lsfabric command regularly to obtain a complete picture of the devices that are
connected and visible from the SVC cluster through the SAN. The lsfabric command
generates a report that displays the FC connectivity between nodes, controllers, and hosts.

Example 9-242 shows the output of an lsfabric command.

Example 9-242 lsfabric command


IBM_2145:ITSO_SVC2:superuser>lsfabric
remote_wwpn remote_nportid id node_name local_wwpn local_port local_nportid
state name cluster_name type
5005076801405034 030A00 1 SVC2N1 50050768014027E2 1 030800
active SVC1N2 ITSO_SVC2 node
5005076801405034 030A00 1 SVC2N1 50050768011027E2 3 030900
active SVC1N2 ITSO_SVC2 node
5005076801305034 040A00 1 SVC2N1 50050768013027E2 2 040800
active SVC1N2 ITSO_SVC2 node
5005076801305034 040A00 1 SVC2N1 50050768012027E2 4 040900
active SVC1N2 ITSO_SVC2 node
50050768012027E2 040900 2 SVC1N2 5005076801305034 2 040A00
active SVC2N1 ITSO_SVC2 node
50050768012027E2 040900 2 SVC1N2 5005076801205034 4 040B00
active SVC2N1 ITSO_SVC2 node
500507680120505C 040F00 1 SVC2N1 50050768013027E2 2 040800
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 1 SVC2N1 50050768012027E2 4 040900
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 2 SVC1N2 5005076801305034 2 040A00
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 2 SVC1N2 5005076801205034 4 040B00
active SVC4N2 ITSO_SVC4 node
50050768013027E2 040800 2 SVC1N2 5005076801305034 2 040A00
active SVC2N1 ITSO_SVC2 node
....
Above and below rows have been removed for brevity
....

20690080E51B09E8 041900 1 SVC2N1 50050768013027E2 2 040800
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 1 SVC2N1 50050768012027E2 4 040900
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801305034 2 040A00
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801205034 4 040B00
inactive ITSO-DS3500 controller
50050768013037DC 041400 1 SVC2N1 50050768013027E2 2 040800
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 1 SVC2N1 50050768012027E2 4 040900
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801305034 2 040A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801205034 4 040B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC2N1 50050768014027E2 1 030800
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC2N1 50050768011027E2 3 030900
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N2 ITSO_SVC3 node
.....
Above and below rows have been removed for brevity
.....
5005076801201D22 021300 1 SVC2N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 1 SVC2N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801305034 2 040A00
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801205034 4 040B00
active SVC2N2 ITSO_SVC2 node
50050768011037DC 011513 1 SVC2N1 50050768014027E2 1 030800
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 1 SVC2N1 50050768011027E2 3 030900
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801105034 3 030B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801301D22 021200 1 SVC2N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801301D22 021200 1 SVC2N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
....
Above and below rows have been removed for brevity
....
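Because the full lsfabric output is wide and wraps on the console, it can be easier to
post-process when a delimiter is specified. The -delim parameter is assumed here because
most SVC information commands accept it; verify with lsfabric -h:

IBM_2145:ITSO_SVC2:superuser>lsfabric -delim :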

For more information about the lsfabric command, see the V6.4.0 Command-Line Interface
User’s Guide for IBM System Storage SAN Volume Controller and Storwize V7000,
GC27-2287.

9.22 T3 recovery process
A procedure, which is known as T3 recovery, was tested and used in select cases in which a
system was completely destroyed. One example is simultaneously pulling power cords from
all nodes to their uninterruptible power supply units. In this case, all nodes boot up to
node error 578 when the power is restored.

In certain circumstances, this procedure can recover most user data. However, it is not to be
used by the client or an IBM service support representative (SSR) without the direct
involvement from IBM Level 3 technical support. This procedure is not published, but we refer
to it here only to indicate that the loss of a system can be recoverable without total data loss.

However, this procedure requires a restoration of application data from the backup. T3
recovery is an extremely sensitive procedure that is to be used as a last resort only, and it
cannot recover any data that was destaged from cache at the time of the total system failure.

Chapter 10. SAN Volume Controller operations using the GUI

In this chapter, we describe IBM SAN Volume Controller (SVC) operational management and
system administration by using the SVC graphical user interface (GUI). The SVC
management GUI is a tool that helps you to monitor, manage, and configure your system.

The information is divided into normal operations and advanced operations. We explain the
basic configuration procedures that are required to get your SVC environment running as
quickly as possible by using its GUI.

In this chapter, we focus on operational aspects.

This chapter includes the following topics:


• Normal SVC operations using GUI
• Monitoring menu
• Working with external disk controllers
• Working with storage pools
• Working with managed disks
• Migration
• Working with hosts
• Working with volumes
• Copy Services and managing FlashCopy
• Copy Services: Managing remote copy
• Managing the SAN Volume Controller clustered system by using the GUI
• Upgrading software
• Managing I/O Groups
• Managing nodes
• Troubleshooting
• User management
• Configuration
• Upgrading the SAN Volume Controller software



10.1 Normal SVC operations using GUI
In this section, we describe several tasks that we define as regular, day-to-day activities. For
illustration, we configured the SVC cluster in a standard topology. However, most of the tasks
that are described in this chapter are similar to a stretched cluster topology. Details about the
SVC stretched cluster are available in IBM SAN Volume Controller Enhanced Stretched
Cluster with VMware, SG24-8211.

Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the one that takes effect.

Important: Data entries that are made through the GUI are case-sensitive.

10.1.1 Introduction to the GUI


As shown in Figure 10-1, the IBM SAN Volume Controller System GUI panel is an important
user interface. Throughout this chapter, we refer to this interface as the IBM SAN Volume
Controller System panel or the System panel.

In later sections of this chapter, we expect users to be able to navigate to this panel without
our explaining the procedure each time.

Figure 10-1 IBM SAN Volume Controller System panel (callouts: dynamic menu with functions, dynamic
system view, expandable system overview, and three status icons with error indicator)

Dynamic menu
From any page in the SVC GUI, you always can access the dynamic menu. The SVC GUI
dynamic menu is on the left side of the SVC GUI window. To browse by using this menu,
hover the mouse pointer over the various icons and choose a page that you want to display,
as shown in Figure 10-2 on page 657.

Figure 10-2 The dynamic menu in the left column of the IBM SAN Volume Controller GUI

The IBM SAN Volume Controller dynamic menu consists of multiple panels. These panels
group common configuration and administration objects and present individual administrative
objects to the IBM SAN Volume Controller GUI users, as shown in Figure 10-3.

Figure 10-3 IBM SAN Volume Controller GUI panel

Suggested tasks
After a successful login, the SVC opens a pop-up window with suggested tasks, notifying
administrators that several key SVC functions are not yet configured. You cannot miss or
overlook this window. However, you can close the pop-up window and perform tasks at any
time.

Figure 10-4 shows the suggested tasks in the System panel.

Figure 10-4 Suggested tasks

In this case, the SVC GUI warns you that no host is defined yet and that no volume is mapped
to a host. You can perform the suggested task directly from this window, or cancel it and run
the procedure later at any convenient time. Other suggested tasks that typically appear after
the initial configuration of the SVC include creating a volume and configuring a storage pool.

The dynamic IBM SAN Volume Controller GUI menu contains the following panels
(Figure 10-3 on page 657):
• Monitoring
• Pools
• Volumes
• Hosts
• Copy Services
• Access
• Settings

Persistent state notification status areas


A control panel is available in the bottom part of the window. This panel is divided into three
status areas to provide information about your system. These persistent state notification
widgets are minimized by default, as shown in Figure 10-5.

Figure 10-5 Notification indicator bars

Health status indicator


The rightmost area of the control panel provides information or alerts about internal and
external connectivity, as shown in Figure 10-6.

Figure 10-6 Health Status area

If non-critical issues exist for your system nodes, external storage controllers, or remote
partnerships, a new status area opens next to the Health Status widget, as shown in
Figure 10-7.

Figure 10-7 Controller path status alert

You can fix the error by clicking Status Alerts to direct you to the Events panel fix procedures.

If a critical system connectivity error exists, the Health Status bar turns red and alerts the
system administrator for immediate action, as shown in Figure 10-8.

Figure 10-8 External storage connectivity loss

Storage allocation indicator


The leftmost indicator shows information about the overall physical capacity (the initial
amount of storage that was allocated). This indicator also shows the virtual capacity
(thin-provisioned storage). The virtual volume size is dynamically changed as data grows or
shrinks, but you still see a fixed capacity. Click the indicator to switch between physical and
virtual capacity, as shown in Figure 10-9.

Figure 10-9 Storage allocation area

The following information is displayed in this storage allocation indicator window. To view all of
the information, you must use the up and down arrow keys:
• Allocated capacity
• Virtual capacity
• Compression ratio

Important: Since version 7.4, the capacity units use the binary prefixes that are defined by
the International Electrotechnical Commission (IEC). The prefixes represent multiplication
by powers of 1024, with the symbols GiB (gibibyte), TiB (tebibyte), and PiB (pebibyte).

Running tasks indicator


The middle area provides information about the running tasks, as shown in Figure 10-10.

Figure 10-10 Long-running tasks area

The following information is displayed in this window:


• Volume migration
• MDisk removal
• Image mode migration
• Extend migration
• FlashCopy
• Metro Mirror
• Global Mirror
• Volume formatting
• Space-efficient copy repair
• Volume copy verification
• Volume copy synchronization
• Estimated time for the task completion

By clicking within the square (as shown in Figure 10-5 on page 658), this area provides
detailed information about running and recently completed tasks, as shown in Figure 10-11.

Figure 10-11 Details about running tasks

Help
Another useful interface feature is integrated help. You can access help for certain fields and
objects by hovering the mouse cursor over the question mark icon next to the field or object
(Figure 10-12 on page 661). Panel-specific help is available by clicking Need Help or by using
the Help link in the upper-right corner of the GUI.

Figure 10-12 Access to panel-specific help

Overview window
In SVC Version 7.4, the welcome window of the GUI changed from the well-known former
Overview panel to the new System panel, as shown in Figure 10-1 on page 656. Clicking
Overview (Figure 10-13) in the upper-right corner of the System panel opens a modified
Overview panel with options that are similar to previous versions of the software.

Figure 10-13 Opening the Overview panel

The following content of the chapter helps you to understand the structure of the panel and
how to navigate to various system components to manage them more efficiently and quickly.

10.1.2 Content view organization


The following sections describe several view options within the SVC GUI in which you can
filter (to minimize the amount of data that is shown on the window), sort, and reorganize the
content on the window.

Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.

Complete the following steps to use search filtering:
1. Click Filter on the upper-left side of the window, as shown in Figure 10-14, to open the
search box.

Figure 10-14 Show filter search box

2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter your table that is based on column names. In our
example, a volume list is displayed that contains the names that include DS somewhere in
the name. DS is highlighted in amber, as shown in Figure 10-15. The search option is not
case-sensitive.

Figure 10-15 Show filtered rows

4. Remove this filtered view by clicking the reset filter icon, as shown in Figure 10-16.

Figure 10-16 Reset the filtered view

Filtering: This filtering option is available in most menu options of the GUI.

Table information
In the table view, you can add or remove the information in the tables on most pages.

For example, on the Volumes page, complete the following steps to add a column to our table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns appears, as shown in Figure 10-17 on
page 663.

Figure 10-17 Add or remove details in a table

2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 10-18.

Figure 10-18 Table with an added ID column

3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 10-19 on page 664.

Figure 10-19 Restore default table view

Sorting: By clicking a column, you can sort a table that is based on that column in
ascending or descending order.

Reorganizing columns in tables


You can move columns by left-clicking and moving the column right or left, as shown in
Figure 10-20. We are attempting to move the State column after the Capacity column.

Figure 10-20 Reorganizing the table columns

10.1.3 Help
To access online help, move the mouse pointer over the question mark (?) icon in the
upper-right corner of any panel and select the context-based help topic, as shown in
Figure 10-21 on page 665. Depending on the panel you are working with, the help displays its
context item.

Figure 10-21 Help link

By clicking Information Center, you are directed to the public IBM Knowledge Center, which
provides all of the information about the SVC systems, as shown in Figure 10-22.

Figure 10-22 SVC information in the IBM Knowledge Center

10.2 Monitoring menu


Hover the cursor over the Monitoring function icon to open the Monitoring menu (Figure 10-23
on page 666). The Monitoring menu offers these navigation directions:
• System: This option opens the general overview of your SVC system, including the
depiction of all devices in a rack and the storage capacity. For more information, see
10.2.1, “System overview” on page 666.
• Events: This option tracks all informational, warning, and error messages that occurred in
the system. You can apply various filters to sort the messages according to your needs or
export the messages to an external comma-separated values (CSV) file. For more
information, see 10.2.3, “Events” on page 670.
• Performance: This option reports the general system statistics that relate to the processor
(CPU) utilization, host and internal interfaces, volumes, and MDisks. You can switch
between MBps or IOPS. For more information, see 10.2.4, “Performance” on page 671.

As of V7.4, the option that was formerly called System Details is integrated into the device
overview on the general System panel, which is available after logging in or when clicking the
option System from the Monitoring menu. For more information, see “Overview window” on
page 661.

Figure 10-23 Accessing the Monitoring menu

In the following sections, we describe each option on the Monitoring menu.

10.2.1 System overview


The System option on the Monitoring menu provides a general overview about your SVC
system, including the depiction of all devices in a rack and the allocated or physical storage
capacity. When thin-provisioned volumes are enabled, the virtual capacity is also shown by
hovering your mouse over the capacity indicator. For more details, see Figure 10-24 on
page 667.

Figure 10-24 System overview that shows capacity

When you click a specific component of a node, a pop-up window indicates the details of the
disk drives in the unit. By right-clicking and selecting Properties, you see detailed technical
parameters, such as capacity, interface, rotation speed, and the drive status (online or offline).

See Figure 10-25 for the details of node 1 in a cluster.

Figure 10-25 Component details

In an environment with multiple SVC systems, you can easily direct the onsite personnel or
technician to the correct device by enabling the identification LED on the front panel. Click
Identify in the pop-up window that is shown in Figure 10-24. Then, wait for confirmation from
the technician that the device in the data center was correctly identified.

After the confirmation, click Turn LED Off (Figure 10-26).

Figure 10-26 Using the identification LED

Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).

Each system that is shown in the Dynamic system view in the middle of a System panel can
be rotated by 180° to see its rear side. Click the rotation arrow in the lower-right corner of the
device, as illustrated in Figure 10-27.

Figure 10-27 Rotating the enclosure

10.2.2 System details


The System Details option was removed from the Monitoring menu; however, its modified
information is still available directly from the System panel. It provides an extended level of
the parameters and technical details that relate to the system, including the integration of
each element into an overall system configuration. Right-click the enclosure that you want
and click Properties to obtain detailed information.

Figure 10-28 System details

The output is shown in Figure 10-29. By using this menu, you can also power off the machine
(without an option for remote start), remove the node or enclosure from the system, or list all
of the volumes that are associated with the system, for example.

Figure 10-29 Enclosure technical details

In addition, from the System panel, you can see an overview of important status information
and the parameters of the Fibre Channel (FC) ports (Figure 10-30).

Figure 10-30 Canister details and vital product data (VPD)

By choosing Fibre Channel Ports, you can see the list and status of the available FC ports
with their worldwide port names (WWPNs), as shown in Figure 10-31.

Figure 10-31 Status of a node’s FC ports

10.2.3 Events
The Events option, which you select from the Monitoring menu, tracks all informational,
warning, and error messages that occur in the system. You can apply various filters to sort
them or export them to an external comma-separated values (CSV) file. A CSV file can be
created from the information that is shown here. Figure 10-32 on page 671 provides an
example of records in the SVC Event log.

Figure 10-32 Event log list

10.2.4 Performance
The Performance panel reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS or even drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 10-33.

Figure 10-33 Performance statistics of the SVC

The performance statistics in the GUI show, by default, the latest five minutes of data. To see
details of each sample, click the graph and select the time stamp, as shown in Figure 10-34
on page 672.

Figure 10-34 Sample details

The charts that are shown in Figure 10-34 represent five minutes of the data stream. For
in-depth storage monitoring and performance statistics with historical data about your SVC
system, use the IBM Tivoli Storage Productivity Center for Disk or IBM Virtual Storage Center.

10.3 Working with external disk controllers


In this section, we describe various configuration and administrative tasks that you perform on
external disk controllers within the SVC environment.

10.3.1 Viewing the disk controller details


To view information about a back-end disk controller that is used by the SVC environment,
select Pools in the dynamic menu and then select External Storage.

The External Storage panel that is shown in Figure 10-35 opens.

Figure 10-35 Disk controller systems

For more information about a specific controller and MDisks, click the plus sign (+) that is to
the left of the controller icon and name.

10.3.2 Renaming a disk controller


After you present a new storage system to the SVC, complete the following steps to name it
so that the storage administrators can identify it more easily:
1. Right-click the newly presented controller default name. Select Rename and enter the
name that you want to associate with this storage system, as shown in Figure 10-36.

Figure 10-36 Renaming a storage system

2. Enter the new name that you want to assign to the controller and then click Rename, as
shown in Figure 10-37.

Figure 10-37 Changing the name of a storage system

Controller name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.

3. A task is started to change the name of this storage system. When it completes, you can
close this window. The new name of your controller is displayed on the External Storage
panel.

10.3.3 Site awareness


Starting with code version 7.2, the SVC introduces a feature that is called site awareness. This
feature is important in an SVC stretched cluster configuration where each SVC node is
installed at a different site, interconnected by FC, and acting as a clustered system across
sites.

In this stretched mode, it is mandatory to assign a site to each component in the SVC
environment (SVC nodes and storage controllers). In a normal topology, the site assignment
is not necessary, and we do not recommend that you configure it. It might affect certain
procedures, such as volume migration or FlashCopy operations.

10.3.4 Discovering MDisks from the external panel
You can discover MDisks from the External Storage panel. Complete the following steps to
discover new MDisks:
1. Ensure that no existing controllers are highlighted. Click Actions.
2. Click Detect MDisks to discover MDisks from this controller, as shown in Figure 10-38.

Figure 10-38 Detect MDisks action

3. When the task completes, click Close to see the newly detected MDisks.

10.4 Working with storage pools


In this section, we describe the tasks that can be performed with the storage pools.

10.4.1 Viewing storage pool information


Perform the following tasks from the Pools panel, as shown in Figure 10-39 on page 674. To
access this panel from the SVC System panel, hover the cursor over the Pools menu and
click Volumes by Pool.

Figure 10-39 Viewing the storage pools

You can add new columns to the table, as described in “Table information” on page 662.

To retrieve more information about a specific storage pool, select any storage pool in the left
column. The upper-right corner of the panel, which is shown in Figure 10-40, contains the
following information about this pool:
򐂰 Status
򐂰 Number of MDisks
򐂰 Number of volume copies
򐂰 Whether Easy Tier is active on this pool
򐂰 Site assignment
򐂰 Volume allocation
򐂰 Capacity

Figure 10-40 Detailed information about a pool

Change the view by selecting MDisks by Pools. Select the pool with which you want to work
and click the plus sign (+), which expands the information. This panel displays the MDisks
that are present in this storage pool, as shown in Figure 10-41.

Figure 10-41 MDisks that are present in a storage pool

10.4.2 Creating storage pools


Complete the following steps to create a storage pool:
1. From the SVC dynamic menu, navigate to the Pools option and select MDisks by Pools.
The MDisks by Pools panel opens. Click Create Pool, as shown in Figure 10-42.

Figure 10-42 Selecting the option to create a new storage pool

The Create Storage Pools wizard opens.

2. In the first window of the wizard, complete the following elements, as shown in Figure 10-43:
a. Specify a name for the storage pool. If you do not provide a name, the SVC
automatically generates the name mdiskgrpx, where x is the ID sequence number that
is assigned by the SVC internally.

Storage pool name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The name can be 1 - 63 characters. The name is
case-sensitive. The name cannot start with a number or the pattern “MDiskgrp”
because this prefix is reserved for SVC internal assignment only.

b. Optional: Change the icon that is associated with this storage pool, as shown in
Figure 10-43.
c. In addition, you can specify the following information and then click Next:
• Extent Size under the Advanced Settings section. The default is 1 GiB.
• Warning threshold to log a message to the event log when the capacity is
exceeded. The default is 80%.

Figure 10-43 Create the storage pool (step 1 of 2)

3. In the next window (as shown in Figure 10-44), complete the following steps to specify the
MDisks that you want to associate with the new storage pool:
a. Select the MDisks that you want to add to this storage pool.

Tip: To add multiple MDisks, press and hold the Ctrl key and click selected items.

b. Click Finish to complete the creation process.

Figure 10-44 Create Pool window (step 2 of 2)

4. Close the task completion window. In the Storage Pools panel (as shown in Figure 10-45),
the new storage pool is displayed.

Figure 10-45 New storage pool was added successfully

All of the required tasks to create a storage pool are complete.
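
For comparison, a storage pool can also be created from the CLI. A hedged sketch with
placeholder names, a 1 GiB extent size (the -ext value is specified in MB), and the default
80% warning threshold (verify the options for your code level):

   mkmdiskgrp -name ITSO_Pool1 -ext 1024 -warning 80%
   addmdisk -mdisk mdisk4:mdisk5 ITSO_Pool1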

10.4.3 Renaming a storage pool


To rename a storage pool, complete the following steps:
1. Select the storage pool that you want to rename and then click Actions → Rename, as
shown in Figure 10-46 on page 677.

Figure 10-46 Renaming a storage pool

2. Enter the new name that you want to assign to the storage pool and click Rename, as
shown in Figure 10-47.

Figure 10-47 Changing the name for a storage pool

Storage pool name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.
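
The approximate CLI equivalent is a single command; a hedged sketch with placeholder
pool names (verify the syntax for your code level):

   chmdiskgrp -name ITSO_Pool1_new ITSO_Pool1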

10.4.4 Deleting a storage pool


To delete a storage pool, complete the following steps:
1. Select the storage pool that you want to delete and then click Actions → Delete Pool, as
shown in Figure 10-48. Alternatively, you can right-click directly on the pool that you want
to delete and get the same options from the menu.

Figure 10-48 Delete Pool option

2. In the Delete Pool window, click Delete to confirm that you want to delete the storage pool,
including its MDisks, as shown in Figure 10-49. If configured volumes exist within the
storage pool that you are deleting, you must unmap and delete the volumes first.

Figure 10-49 Deleting a pool

New in V7.4: The SVC 7.4 does not allow the user to directly delete pools that contain any
active volumes.
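
A hedged CLI sketch with a placeholder pool name. Without -force, the command fails if the
pool still contains MDisks or volumes, which matches the GUI behavior that is described
above; -force also removes the MDisks and any remaining volumes, so use it with care:

   rmmdiskgrp ITSO_Pool1
   rmmdiskgrp -force ITSO_Pool1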

10.5 Working with managed disks


This section describes the various configuration and administrative tasks that you can
perform on the managed disks (MDisks) within the SVC environment.

10.5.1 MDisk information


From the SVC dynamic menu, select Pools and click MDisks by Pools. The MDisks panel
opens, as shown in Figure 10-50 on page 680. Click the plus sign (+) for one or more storage
pools to see the MDisks that belong to a certain pool.

To retrieve more information about a specific MDisk, complete the following steps:
1. From the expanded view of a storage pool in the MDisks panel, select an MDisk.
2. Click Properties, as shown in Figure 10-50 on page 680.

Figure 10-50 MDisks menu

3. For the selected MDisk, an overview is displayed that shows its parameters and
dependent volumes, as shown in Figure 10-51.

Figure 10-51 MDisk Overview page

4. Click the Dependent Volumes tab to display information about the volumes that are on
this MDisk, as shown in Figure 10-52. For more information about the volume panel, see
10.8, “Working with volumes” on page 699.

Figure 10-52 Dependent volumes for an MDisk

5. Click Close to return to the previous window.

10.5.2 Renaming an MDisk


Complete the following steps to rename an MDisk that is managed by the SVC clustered
system:
1. In the MDisk panel that is shown in Figure 10-50 on page 680, select the MDisk that you
want to rename.
2. Click Actions → Rename, as shown in Figure 10-53.
You can select multiple MDisks to rename by pressing and holding the Ctrl key and
selecting the MDisks that you want to rename.

Figure 10-53 Rename MDisk action

Alternative: You can right-click an MDisk directly, and select Rename.

3. In the Rename MDisk window (Figure 10-54), enter the new name that you want to assign
to the MDisk and click Rename.

Figure 10-54 Renaming an MDisk

MDisk name: The name can consist of the letters A - Z and a - z, the numbers 0 - 9,
the dash (-), and the underscore (_) character. The name can be 1 - 63 characters.
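
A hedged CLI sketch of the same rename with placeholder names (verify the syntax for your
code level):

   chmdisk -name ITSO_mdisk_ds8k_01 mdisk7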

10.5.3 Discovering MDisks
Complete the following steps to discover newly assigned MDisks:
1. In the SVC dynamic menu, move the pointer over Pools and click MDisks by Pools.
2. Ensure that no existing storage pools are selected. Click Actions.
3. Click Detect MDisks, as shown in Figure 10-55.

Figure 10-55 Detect MDisks action

The Discover Devices window opens.


4. When the task is completed, click Close.
5. Newly assigned MDisks are displayed in the Unassigned MDisks window as Unmanaged,
as shown in Figure 10-56.

Figure 10-56 Newly discovered unmanaged disks

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are correctly assigned to the SVC. Also, check that correct
zoning is in place. For example, ensure that the SVC can see the disk subsystem.

Site awareness: Do not assign sites to the SVC nodes and external storage controllers
in a standard, normal topology. Site awareness is intended primarily for SVC stretched
clusters. If any MDisks or controllers appear offline after detection, remove the site
assignment from the SVC node or controller and detect the MDisks again.
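
A hedged CLI sketch of the equivalent discovery and of listing the newly detected
unmanaged MDisks (verify the syntax for your code level):

   detectmdisk
   lsmdisk -filtervalue mode=unmanaged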

10.5.4 Assigning MDisks to a storage pool
If empty storage pools exist or you want to assign more MDisks to your pools that already
have existing MDisks, use the following steps:

Important: You can add only unmanaged MDisks to a storage pool.

1. From the MDisks by Pools panel, select the unmanaged MDisk that you want to add to a
storage pool.
2. Click Actions → Assign to Pool, as shown in Figure 10-57.

Figure 10-57 Assign an MDisk to a pool

Alternative: You can also access the Assign to Pool action by right-clicking an MDisk.

3. From the Add MDisk to Pool window, select to which pool you want to add this MDisk, and
then, click Add to Pool, as shown in Figure 10-58.

Figure 10-58 Adding an MDisk to an existing storage pool
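
A hedged CLI sketch with placeholder names; multiple MDisks can be supplied as a
colon-separated list (verify the syntax for your code level):

   addmdisk -mdisk mdisk7 ITSO_Pool1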

10.5.5 Unassigning MDisks from a storage pool


To unassign an MDisk from a storage pool, complete the following steps:
1. Select the MDisk that you want to unassign from a storage pool.
2. Click Actions → Unassign from Pool, as shown in Figure 10-59 on page 684.

Figure 10-59 Actions: Unassign from Pool

Alternative: You can also access the Unassign from Pool action by right-clicking an
unmanaged MDisk.

3. In the Remove MDisk from Pool window (Figure 10-60), verify the number of MDisks that
you want to remove from this pool. This verification helps to prevent accidental data loss.
If volumes are using the MDisks that you are removing from the storage pool, you must
select Remove the MDisk from the pool even if it has data on it to confirm the removal.
The system migrates the data to other MDisks in the pool before the MDisk is removed.
4. Click Delete, as shown in Figure 10-60.

Figure 10-60 Unassigning an MDisk from an existing storage pool

When the migration is complete, the MDisk status changes to Unmanaged. Ensure that
the MDisk remains accessible to the system until its status becomes Unmanaged. This
process might take time. If you disconnect the MDisk before its status becomes
Unmanaged, all of the volumes in the pool go offline until the MDisk is reconnected.
An error message is displayed (as shown in Figure 10-61) if insufficient space exists to
migrate the volume data to other extents on other MDisks in that storage pool.

Figure 10-61 Unassign MDisk error message
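
A hedged CLI sketch with placeholder names; the -force parameter triggers the same extent
migration that the GUI check box enables (verify the options for your code level):

   rmmdisk -mdisk mdisk7 -force ITSO_Pool1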

10.6 Migration
For more information about data migration, see Chapter 6, “Data migration” on page 241.

10.7 Working with hosts


In this section, we describe the various configuration and administrative tasks that you can
perform on the host object that is connected to your SVC.

Host configuration: For more information about connecting hosts to the SVC in a SAN
environment, see Chapter 5, “Host configuration” on page 169.

A host system is a computer that is connected to the SVC through an FC interface, Fibre
Channel over Ethernet (FCoE), or an Internet Protocol (IP) network.

A host object is a logical object in the SVC that represents a list of worldwide port names
(WWPNs) and a list of Internet Small Computer System Interface (iSCSI) names that identify
the interfaces that the host system uses to communicate with the SVC. iSCSI names can be
iSCSI-qualified names (IQN) or extended unique identifiers (EUI).

A typical configuration has one host object for each host system that is attached to the SVC. If
a cluster of hosts accesses the same storage, you can add host bus adapter (HBA) ports from
several hosts to one host object to simplify a configuration. A host object can have both
WWPNs and iSCSI names.

The following methods can be used to visualize and manage your SVC host objects from the
SVC GUI Hosts menu selection:
򐂰 Use the Hosts panel, as shown in Figure 10-62.

Figure 10-62 Hosts panel

򐂰 Use the Ports by Host panel, as shown in Figure 10-63.

Figure 10-63 Ports by Host panel

򐂰 Use the Host Mapping panel, as shown in Figure 10-64.

Figure 10-64 Host Mapping panel

򐂰 Use the Volumes by Hosts panel, as shown in Figure 10-65 on page 687.

Figure 10-65 Volumes by Hosts panel

Important: Several actions on hosts are specific to the Ports by Host panel or the Host
Mapping panel, but all of these actions and others are accessible from the Hosts panel. For
this reason, all actions on hosts in this chapter are run from the Hosts panel.

10.7.1 Host information


To access the Hosts panel from the IBM SAN Volume Controller System panel that is shown
in Figure 10-1 on page 656, move the mouse pointer over the Hosts selection of the dynamic
menu and click Hosts.

You can add information (new columns) to the table in the Hosts panel, as shown in
Figure 10-17 on page 663. For more information, see “Table information” on page 662.

To retrieve more information about a specific host, complete the following steps:
1. In the table, select a host.
2. Click Actions → Properties, as shown in Figure 10-66.

Figure 10-66 Actions: Host properties

Alternative: You can also access the Properties action by right-clicking a host.

3. You are presented with information for a host in the Overview window, as shown in
Figure 10-67 on page 688.

Figure 10-67 Host details: Overview

Show Details option: To obtain more information about the hosts, select Show Details
(Figure 10-67).

4. On the Mapped Volumes tab (Figure 10-68), you can see the volumes that are mapped to
this host.

Figure 10-68 Host details: Mapped volumes

5. The Port Definitions tab, as shown in Figure 10-69 on page 689, displays attachment
information, such as the WWPNs that are defined for this host or the iSCSI IQN that is
defined for this host.

Figure 10-69 Host details: Port Definitions tab

When you finish viewing the details, click Close to return to the previous window.

10.7.2 Adding a host


Two types of connections to hosts are available: Fibre Channel (FC and FCoE) and iSCSI. In
this section, we describe the following types of connection methods:
򐂰 For FC hosts, see the steps in “Fibre Channel-attached hosts”.
򐂰 For iSCSI hosts, see the steps in “iSCSI-attached hosts” on page 692.

Note: FCoE hosts are listed under the Fibre Channel Hosts option in the SVC GUI Add Host
menu. Click Fibre Channel Host to access the FCoE host options. (See Figure 10-71 on page 690.)

Fibre Channel-attached hosts


To create a host that uses the FC connection type, complete the following steps:
1. To go to the Hosts panel from the SVC System panel that is shown in Figure 10-1 on
page 656, hover the cursor over the Hosts selection and click Hosts, as shown in
Figure 10-70.
2. Click Add Host, as shown in Figure 10-70.

Figure 10-70 Create Host action

3. Select Fibre Channel Host from the two available connection types, as shown in
Figure 10-71 on page 690.

Figure 10-71 Create a Fibre Channel host

4. In the Add Host window (Figure 10-72 on page 691), enter a name for your host in the
Host Name field.

Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
provide a name, use letters A - Z and a - z, numbers 0 - 9, or the underscore (_)
character. The host name can be 1 - 63 characters.

5. In the Fibre Channel Ports section, use the drop-down list box to select the WWPNs that
correspond to your HBA or HBAs and then click Add Port to List. Repeat this step to add
more ports.

Deleting an FC port: If you added the wrong FC port, you can delete it from the list by
clicking the red X.

If your WWPNs do not display, click Rescan to rediscover the available WWPNs that are
new since the last scan.

WWPN still not displayed: In certain cases, your WWPNs still might not display, even
though you are sure that your adapter is functioning (for example, you see the WWPN
in the switch name server) and your SAN zones are set up correctly. To correct this
situation, enter the WWPN of your HBA or HBAs into the drop-down list box and click
Add Port to List. It is displayed as unverified.

6. If you need to modify the I/O Group or Host Type, you must select Advanced in the
Advanced Settings section to access these Advanced Settings, as shown in Figure 10-72
on page 691. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these hosts, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that use MPxIO.

Figure 10-72 Creating a Fibre Channel host

7. Click Add Host, as shown in Figure 10-72. After you return to the Hosts panel
(Figure 10-73 on page 692), you can see the newly added FC host.

Figure 10-73 New Fibre Channel host
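
A hedged CLI sketch of creating a comparable FC host object. The host name, WWPN, I/O
Group, and host type are placeholders for this example (verify the syntax for your code level):

   mkhost -name ITSO_W2K8 -fcwwpn 2100000E1E123456 -iogrp 0 -type generic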

iSCSI-attached hosts
To create a host that uses the iSCSI connection type, complete the following steps:
1. To go to the Hosts panel from the SVC System panel on Figure 10-1 on page 656, move
the mouse pointer over the Hosts selection and click Hosts.
2. Click Add Host, as shown in Figure 10-70 on page 689, and select iSCSI Host.

3. In the Add Host window (as shown in Figure 10-74), enter a name for your host in the Host
Name field.

Figure 10-74 Adding an iSCSI host

Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The host name can be 1 - 63 characters.

4. In the iSCSI ports section, enter the iSCSI initiator or IQN as an iSCSI port and then click
Add Port to List. This IQN is obtained from the server and generally has the same
purpose as the WWPN. Repeat this step to add more ports.

Deleting an iSCSI port: If you add the wrong iSCSI port, you can delete it from the list
by clicking the red X.

If needed, select Use CHAP authentication (all ports), as shown in Figure 10-74, and
enter the Challenge Handshake Authentication Protocol (CHAP) secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole system
under the system’s properties or for each host definition. The CHAP must be identical on
the server and the system or host definition. You can create an iSCSI host definition
without the use of a CHAP.
5. If you need to modify the I/O Group or Host Type, you must select the Advanced option in
the Advanced Settings section to access these settings, as shown in Figure 10-72 on
page 691. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these types, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that are using
MPxIO.
6. Click Add Host to create the iSCSI host definition.
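
A hedged CLI sketch of creating a comparable iSCSI host object and setting a CHAP secret.
The host name, IQN, and secret are placeholders (verify the syntax for your code level):

   mkhost -name ITSO_LNX1 -iscsiname iqn.1994-05.com.redhat:itso-lnx1 -iogrp 0
   chhost -chapsecret passw0rd ITSO_LNX1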

10.7.3 Renaming a host
Complete the following steps to rename a host:
1. In the table, select the host that you want to rename.
2. Click Actions → Rename, as shown in Figure 10-75.

Figure 10-75 Rename action

Alternatives: Two other methods can be used to rename a host. You can right-click a
host and select Rename, or you can use the method that is described in Figure 10-76
on page 694.

3. In the Rename Host window, enter the new name that you want to assign and click
Rename, as shown in Figure 10-76.

Figure 10-76 Renaming a host

10.7.4 Deleting a host


To delete a host, complete the following steps:
1. In the table, select the host or hosts that you want to delete and click Actions → Remove,
as shown in Figure 10-77 on page 695.

Figure 10-77 Remove action

Alternative: You can also right-click a host and select Remove.

2. The Remove Host window opens, as shown in Figure 10-78. In the Verify the number of
hosts that you are deleting field, enter the number of hosts that you want to remove. This
verification was added to help you avoid deleting the wrong hosts inadvertently.
If volumes are still associated with the host and you are sure that you want to delete the
host even though these volumes will no longer be accessible to it, select Remove the host
even if volumes are mapped to them.
3. Click Delete to complete the process, as shown in Figure 10-78.

Figure 10-78 Deleting a host
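
A hedged CLI sketch with a placeholder host name; -force removes the host even if
mappings still exist, which matches the GUI check box (verify the options for your code level):

   rmhost -force ITSO_W2K8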

10.7.5 Creating or modifying a host mapping


To modify the host mapping, complete the following steps:
1. In the table, select the host.
2. Click Actions → Modify Mappings, as shown in Figure 10-79 on page 696.

Tip: You can also right-click a host and select Modify Mappings.

Figure 10-79 Modify Mappings action

3. In the Modify Host Mappings window (Figure 10-80), select the volume or volumes that
you want to map to this host and move each volume to the table on the right by clicking the
two greater than symbols (>>). If you must remove the volumes, select the volume and
click the two less than symbols (<<).

Figure 10-80 Modify host mappings window: Adding volumes to a host

4. In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Click Edit SCSI ID (Figure 10-81 on
page 697).

Figure 10-81 Changing the SCSI ID

When you attempt to map a volume that is already mapped to another host, a warning
window appears and prompts for confirmation (Figure 10-82). Mapping a volume to
multiple hosts is intended for clustered or fault-tolerant systems, for example.

Figure 10-82 Volume is already mapped to another host

Changing a SCSI ID: You can change the SCSI ID only on new mappings. To change the
SCSI ID of an existing mapping, you must unmap the volume and re-create the mapping.

5. In the Edit SCSI ID window, change SCSI ID and then click OK, as shown in Figure 10-83
on page 698.

Figure 10-83 Modify host mappings window: Edit SCSI ID

6. After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships.
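
A hedged CLI sketch of creating a comparable mapping with an explicit SCSI ID. The host
and volume names are placeholders (verify the syntax for your code level):

   mkvdiskhostmap -host ITSO_W2K8 -scsi 0 ITSO_VOL_001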

10.7.6 Deleting a host mapping


To delete a host mapping, complete the following steps:
1. In the table, select the host for which you want to delete a host mapping.
2. Click Actions → Modify Mappings, as shown in Figure 10-84.

Tip: You can also right-click a host and select Modify Mappings.

3. Select the host mapping or mappings that you want to remove.


4. Click the two less than symbols (<<) in the middle of the window after you select the
volumes that you want to remove. Then, click Apply or Map Volumes to complete the
Modify Host Mapping task.

10.7.7 Deleting all host mappings


To delete all host mappings for a host, perform the following steps:
1. Select the host and click Actions → Unmap All volumes, as shown in Figure 10-84.

Figure 10-84 Unmap All Volumes option

2. From the Unmap from Host window (Figure 10-85), enter the number of mappings that you
want to remove in the “Verify the number of mappings that this operation affects” field. This
verification helps you to avoid deleting the wrong hosts unintentionally.

Figure 10-85 Unmap from Host window

3. Click Unmap to remove the host mapping or mappings. You are returned to the Hosts
panel.
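
The CLI typically removes mappings one at a time; a hedged sketch is to list the mappings
for the host and then remove each one (names are placeholders; verify the syntax for your
code level):

   lshostvdiskmap ITSO_W2K8
   rmvdiskhostmap -host ITSO_W2K8 ITSO_VOL_001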

10.8 Working with volumes


In this section, we describe the tasks that you can perform at a volume level.

You can visualize and manage your volumes by using the following methods:
򐂰 Use the Volumes panel, as shown in Figure 10-86.

Figure 10-86 Volumes panel

򐂰 Or use the Volumes by Pool panel, as shown in Figure 10-87.

Figure 10-87 Volumes by Pool panel

򐂰 Alternatively, use the Volumes by Host panel, as shown in Figure 10-88.

Figure 10-88 Volumes by Host panel

Important: Several actions on volumes are specific to the Volumes by Pool panel or to the
Volumes by Host panel. However, all of these actions and others are accessible from the
Volumes panel. We run all of the actions in the following sections from the Volumes panel.

10.8.1 Volume information


To access the Volumes panel from the IBM SAN Volume Controller System panel that is
shown in Figure 10-1 on page 656, move the mouse pointer over the Volumes selection and
click Volumes, as shown in Figure 10-86 on page 699.

You can add information (new columns) to the table in the Volumes panel, as shown in
Figure 10-17 on page 663. For more information, see “Table information” on page 662.

To retrieve more information about a specific volume, complete the following steps:
1. In the table, select a volume and click Actions → Properties, as shown in Figure 10-89.

Figure 10-89 Volume Properties action

Tip: You can also access the Properties action by right-clicking a volume name.

The Overview tab shows information about a volume, as shown in Figure 10-90.

Figure 10-90 Volume properties: Overview tab

The Host Maps tab (Figure 10-91 on page 702) displays the hosts that are mapped with
this volume.

Figure 10-91 Volume properties: Volume is mapped to this host

The Member MDisks tab (Figure 10-92) displays the MDisks that are used for this volume.
You can perform actions on the MDisks, such as removing them from a pool, adding them
to a tier, renaming them, showing their dependent volumes, or displaying their properties.

Figure 10-92 Volume properties: Member MDisks

2. When you finish viewing the details, click Close to return to the Volumes panel.

10.8.2 Creating a volume


To create one or more volumes, complete the following steps:
1. From the IBM SAN Volume Controller System panel that is shown in Figure 10-1 on
page 656, move the mouse pointer over the Volumes selection and click Volumes.
2. Click Create Volumes, as shown in Figure 10-93.

Figure 10-93 Create Volumes action

3. Select one of the following presets, as shown in Figure 10-94 on page 703:
– Generic: Create volumes that use a fully allocated (thick) amount of capacity from the
selected storage pool.
– Thin Provision: Create volumes whose capacity is virtual (seen by the host), but that
use only the real capacity that is written by the host application. The virtual capacity of
a thin-provisioned volume often is larger than its real capacity.
– Mirror: Create volumes with two physical copies that provide data protection. Each
copy can belong to a separate storage pool to protect data from storage failures.
– Thin Mirror: Create volumes with two physical copies to protect data from failures while
using only the real capacity that is written by the host application.
– Compressed: Create volumes whose data is compressed while it is written to disk,
which saves more space.

Changing the preset: For our example, we chose the Generic preset. However, whichever
preset you choose, you can reconsider your decision later by customizing the volume
when you click the Advanced option.

4. After selecting a preset (in our example, Generic), you must select the storage pool on
which the data is striped, as shown in Figure 10-94.

Figure 10-94 Creating a volume: Select preset and the storage pool

5. After you select the storage pool, the window is updated automatically. You must enter the
following information, as shown in Figure 10-95 on page 704:
– Enter a volume quantity. You can create multiple volumes at the same time by using an
automatic sequential numbering suffix.
– Enter a name if you want to create a single volume, or a naming prefix if you want to
create multiple volumes.

Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.

– Enter the size of the volume that you want to create and select the capacity unit of
measurement (bytes, KiB, MiB, GiB, or TiB) from the list.

Tip: An entry of 1 GiB uses 1,024 MiB.

An updated summary automatically appears in the bottom of the window to show the
amount of space that is used and the amount of free space that remains in the pool.

Figure 10-95 Create Volume: Volume details

The following optional actions are available from this window:


– Modify the storage pool by clicking Edit. In this case, select another storage pool.
– You can create more volumes by clicking the arrows in the Quantity field.

Naming: When you create more than one volume, the wizard does not prompt you
for a name for each volume. Instead, the name that you enter here becomes a prefix,
and a numeric suffix (starting at zero) is appended to the prefix as each volume is
created. You can change the starting suffix to any whole non-negative number.
Modifying the ending value increases or decreases the number of volumes that are
created accordingly.

6. You can activate and customize advanced features, such as thin-provisioning or mirroring,
depending on the preset that you selected. To access these settings, click Advanced.
On the Characteristics tab (Figure 10-96 on page 705), you can set the following options:
– General: Format the new volume by selecting Format Before Use. Formatting writes
zeros to the MDisk extents of the volume before it can be used.
– Locality: Choose a caching I/O Group and then select a preferred node. You can leave
the default values for SVC auto-balance. After you select a caching I/O Group, you also
can add more I/O Groups as Accessible I/O Groups.
– OpenVMS only: Enter the user-defined identifier (UDID) for OpenVMS. You must
complete this field only for OpenVMS systems.

UDID: Each OpenVMS Fibre Channel-attached volume requires a user-defined
identifier or unit device identifier (UDID). A UDID is a non-negative integer that is
used when an OpenVMS device name is created. To recognize volumes, OpenVMS
uses the UDID, which must be a unique numeric value.

Figure 10-96 Create Volume: Advanced settings and Characteristics

On the Capacity Management tab (Figure 10-97), you can set the following options after
you activate thin provisioning by selecting Thin-Provisioned:
– Real Capacity: Enter the real size that you want to allocate. This size is the percentage
of the virtual capacity or a specific number in GiB of the disk space that is allocated.
– Automatically Expand: Select to allow the real disk size to grow, as required.
– Warning Threshold: Enter a percentage of the virtual volume capacity for a threshold
warning. A warning message is generated when the used disk capacity on the
space-efficient copy first exceeds the specified threshold.
– Thin-Provisioned Grain Size: Select the grain size: 32 KiB, 64 KiB, 128 KiB, or
256 KiB. Smaller grain sizes save space. Larger grain sizes produce better
performance. Try to match the FlashCopy grain size if the volume is used for
FlashCopy.

Figure 10-97 Create Volume: Advanced settings

On the Capacity Management tab, after you activate compression by selecting
Compressed, you create a compressed volume by using the Real-time Compression
feature (as shown in Figure 10-98 on page 706). As with thin-provisioned volumes,
compressed volumes have virtual, real, and used capacities.

Important: Compressed and uncompressed volumes must not be mixed within the
same pool.

Figure 10-98 Create Volume: Advanced settings

For more information about the Real-time Compression feature, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859, and
Implementing IBM Real-time Compression in SAN Volume Controller and IBM Storwize
V7000, TIPS1083.
On the Mirroring tab (Figure 10-99), you can set the Mirror Sync Rate option after you
activate mirroring by selecting Create Mirrored Copy. Enter the Mirror Sync Rate, which
is the I/O governing rate, by using a percentage that determines how quickly copies are
synchronized. A zero value disables synchronization.

Important: When you activate this feature from the Advanced menu, you must select a
secondary pool in the main window (Figure 10-95 on page 704). The primary pool is
used as the primary and preferred copy for read operations. The secondary pool is
used as the secondary copy.

Figure 10-99 Create Volume: Advanced settings for mirroring

7. After you set all of the advanced settings, click OK to return to the main menu
(Figure 10-95 on page 704).
8. You can choose to create only the volumes by clicking Create, or you can create and map
the volumes by selecting Create and Map to Host.
If you select to create only the volumes, you are returned to the Volumes panel. You see
that your volumes were created but not mapped, as shown in Figure 10-100. You can map
them later.

Figure 10-100 Volumes that are created without mapping

If you want to create and map the volumes from the volume creation window, click Continue
after the task finishes and another window opens. In the Modify Host Mappings window,
select the I/O Group and host to which you want to map these volumes by using the
drop-down menu (as shown in Figure 10-101), and you are automatically directed to the host
mapping table.

Figure 10-101 Modify Host Mappings: Select the host to which to map your volumes

In the Modify Host Mappings window, verify the mapping. If you want to modify the
mapping, select the volume or volumes that you want to map to a host and move each of
them to the table on the right by clicking the two greater than symbols (>>), as shown in
Figure 10-102. If you must remove the mappings, click the two less than symbols (<<).

Figure 10-102 Modify Host Mappings: Host mapping table

After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships and finalize the creation of the volumes.
You return to the main Volumes panel. You can see that your volumes were created and
mapped, as shown in Figure 10-103.

Figure 10-103 Volumes are created and mapped to the host
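
A hedged CLI sketch of the presets that are described above (generic, thin-provisioned,
compressed, and mirrored, in that order). The pool, I/O Group, names, and sizes are
placeholders; verify the options for your code level:

   mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -name ITSO_VOL_001
   mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 256 -name ITSO_VOL_TP
   mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -rsize 2% -autoexpand -compressed -name ITSO_VOL_RTC
   mkvdisk -mdiskgrp ITSO_Pool1:ITSO_Pool2 -iogrp 0 -size 10 -unit gb -copies 2 -name ITSO_VOL_MIR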

10.8.3 Renaming a volume
Complete the following steps to rename a volume:
1. In the table, select the volume that you want to rename.
2. Click Actions → Rename, as shown in Figure 10-104.

Figure 10-104 Selecting the Rename option

Tip: Two other ways are available to rename a volume. You can right-click a volume and
select Rename, or you can use the method that is described in 10.8.4, “Modifying a
volume” on page 708.

3. In the Rename Volume window, enter the new name that you want to assign to the volume
and click Rename, as shown in Figure 10-105.

Figure 10-105 Renaming a volume

Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.

10.8.4 Modifying a volume


To modify a volume, complete the following steps:
1. In the table, select the volume that you want to modify.
2. Click Actions → Properties, as shown in Figure 10-106 on page 709.

Figure 10-106 Properties action

Tip: You can also right-click a volume and select Properties.

3. In the Overview tab, click Edit to modify the parameters for this volume, as shown in
Figure 10-107 on page 710.
From this window, you can modify the following parameters:
– Volume Name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.
– Accessible I/O Group: You can select another I/O Group from the list to add an
additional I/O Group to the existing I/O Group for this volume.
– Mirror Sync Rate: Change the Mirror Sync rate, which is the I/O governing rate, by
using a percentage that determines how quickly copies are synchronized. A zero value
disables synchronization.
– Cache Mode: Change the caching policy of a volume. Caching policy can be set to
Enabled (read/write caching enabled), Disabled (no caching enabled), or Read Only
(only read caching enabled).
– OpenVMS: Enter the UDID (OpenVMS). This field must be completed for an OpenVMS
system only.

UDID: Each OpenVMS Fibre Channel-attached volume requires a user-defined
identifier or unit device identifier (UDID). A UDID is a non-negative integer that is
used when an OpenVMS device name is created. To recognize volumes, OpenVMS
uses the UDID, which must be a unique numeric value.

Figure 10-107 Modify a volume

4. Click Save to save the changes.


5. Click Close to close the Volume Details window.

10.8.5 Modifying thin-provisioned or compressed volume properties

Note: In the following sections, we demonstrate GUI operations by using a thin-provisioned
volume as an example. However, the same actions apply to a compressed volume preset.

In addition to the properties that you can modify by following the instructions in 10.8.4, “Modifying a volume”
on page 708, other properties are specific to thin-provisioned or compressed volumes that
you can modify by completing the following steps:
1. Depending on whether the volume is non-mirrored or mirrored, complete one of the
following steps:
– For a non-mirrored volume: Select the volume by clicking Actions → Volume Copy
Actions → Thin-Provisioned (or Compressed) → Edit Properties, as shown in
Figure 10-108 on page 711.

Figure 10-108 Non-mirrored volume: Thin-Provisioned Edit Properties

Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Edit Properties.

– For a mirrored volume: Select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify.
Click Actions, and then click Thin-Provisioned (or Compressed) → Edit Properties,
as shown in Figure 10-109.

Figure 10-109 Mirrored volume: Thin-Provisioned Edit Properties action

Tip: You can also right-click the thin-provisioned copy and select
Thin-Provisioned → Edit Properties.

2. The Edit Properties - volumename (Copy #) window (where volumename is the volume
that you selected in the previous step) opens, as shown in Figure 10-110. In this window,
you can modify the following volume characteristics:
– Warning Threshold: Enter a percentage. This function generates a warning when the
used disk capacity on the thin-provisioned or compressed copy first exceeds the
specified threshold.
– Enable Autoexpand: Autoexpand allows the real disk size to grow, as required,
automatically.

Figure 10-110 Edit the thin-provisioned properties window

GUI: You also can modify the real size of your thin-provisioned or compressed volume by
using the GUI, depending on your needs. For more information, see 10.8.11, “Shrinking
the real capacity of a thin-provisioned or compressed volume” on page 721, or 10.8.12,
“Expanding the real capacity of a thin-provisioned or compressed volume” on page 723.

10.8.6 Deleting a volume


To delete a volume, complete the following steps:
1. In the table, select the volume or volumes that you want to delete.
2. Click Actions → Delete, as shown in Figure 10-111 on page 713.

Figure 10-111 Delete a volume action

Alternative: You can also right-click a volume and select Delete.

3. The Delete Volume window opens, as shown in Figure 10-112. In the “Verify the number of
volumes that you are deleting” field, enter a value for the number of volumes that you want
to remove. This verification helps you to avoid deleting the wrong volumes.

Important: Deleting a volume is a destructive action for any user data on that volume.
A volume cannot be deleted if the SVC records any I/O activity on the volume during the
defined past time interval.

If the volume is still mapped to a host or is used in FlashCopy or remote copy relationships,
and you are sure that you want to delete it, select Delete the volume even if it has
host mappings or is used in FlashCopy mappings or remote-copy relationships.
Then, click Delete, as shown in Figure 10-112.

Figure 10-112 Delete Volume

Note: You also can delete a mirror copy of a mirrored volume. For information about
deleting a mirrored copy, see 10.8.15, “Deleting a mirrored copy from a volume mirror”
on page 729.
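
A hedged CLI sketch with a placeholder volume name; -force also removes host mappings
and copy relationships, so use it with the same care as the GUI check box:

   rmvdisk -force ITSO_VOL_001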

10.8.7 Deleting a host mapping

Important: Before you delete a host mapping, ensure that the host is no longer using the
volume. Unmapping a volume from a host does not destroy the volume contents.
Unmapping a volume has the same effect as powering off the computer without first
performing a clean shutdown; therefore, the data on the volume might end up in an
inconsistent state. Also, any running application that was using the disk receives I/O errors
and might not recover until a forced application or server reboot.

To delete a host mapping to a volume, complete the following steps:


1. In the table, select the volume for which you want to delete a host mapping.
2. Click Actions → Properties, as shown in Figure 10-113.

Figure 10-113 Volume Properties

Tip: You can also right-click a volume and select Properties.

3. In the Properties window, click the Host Maps tab, as shown in Figure 10-114 on
page 715.

Figure 10-114 Volume Details: Host Maps tab

Alternative: You also can access this window by selecting the volume in the table and
clicking View Mapped Hosts in the Actions menu, as shown in Figure 10-115 on
page 715.

Figure 10-115 View Mapped Hosts

4. Select the host mapping or mappings that you want to remove.


5. Click Unmap from Host, as shown in Figure 10-114.
6. In the “Verify the number of hosts that this action affects” field of the Unmap Host window
(Figure 10-116 on page 716), enter a value for the number of host objects that you want to
remove. This verification helps you to avoid deleting the wrong host objects.

Figure 10-116 Volume Details: Unmap Host

7. Click Unmap to remove the host mapping or mappings. You are returned to the Host Maps
window. Click Refresh to verify the results of the unmapping action, as shown in
Figure 10-117.

Figure 10-117 Volume Details: Volume unmapping verification

8. Click Close to return to the Volumes panel.

10.8.8 Deleting all host mappings for a volume


To delete all host mappings for a volume, complete the following steps:
1. In the table, select the volume for which you want to delete all host mappings.
2. Click Actions → Unmap All Hosts, as shown in Figure 10-118 on page 717.

Figure 10-118 Unmap All Hosts from Actions menu

Tip: You can also right-click a volume and select Unmap All Hosts.

3. In the “Verify the number of mappings that this operation affects” field in the Unmap from
Hosts window (Figure 10-119), enter the number of mappings that you want to remove.
This verification helps you to avoid removing the wrong mappings.

Figure 10-119 Unmap from Hosts window

4. Click Unmap to remove the host mapping or mappings. You are returned to the All
Volumes panel.

10.8.9 Shrinking a volume


The SVC shrinks a volume by removing the required number of extents from the end of the
volume. Depending on where the data is on the volume, this action can destroy data. For
example, you might have a volume that consists of 128 extents (0 - 127) of 16 MiB (2 GiB
capacity), and you want to decrease the capacity to 64 extents (1 GiB capacity). In this
example, the SVC removes extents 64 - 127. Depending on the operating system, no easy
way exists to ensure that your data is placed entirely on extents 0 - 63, so be aware that you
might lose data.

Although shrinking a volume is an easy task by using the SVC, ensure that your operating
system supports shrinking (natively or by using third-party tools) before you use this function.

In addition, the preferred practice is to always have a consistent backup before you attempt to
shrink a volume.

Important: For thin-provisioned or compressed volumes, the use of this method to shrink a
volume results in shrinking its virtual capacity. For more information about shrinking its real
capacity, see 10.8.11, “Shrinking the real capacity of a thin-provisioned or compressed
volume” on page 721.

Shrinking a volume is useful under the following circumstances:


򐂰 Reducing the size of a candidate target volume of a copy relationship to make it the same
size as the source
򐂰 Releasing space from volumes to have free extents in the storage pool, if you no longer
use that space and take precautions with the remaining data

Assuming that your operating system supports it, perform the following steps to shrink a
volume:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
2. In the volume table, select the volume that you want to shrink. Click Actions → Shrink, as
shown in Figure 10-120.

Figure 10-120 Shrink volume action

Tip: You can also right-click a volume and select Shrink.

3. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-121 on page 719.
You can enter how much you want to shrink the volume by using the Shrink by field or you
can directly enter the final size that you want to use for the volume by using the Final size
field. The other field is computed automatically. For example, if you have a 10 GiB volume
and you want it to become 6 GiB, you can specify 4 GiB in the Shrink by field or you can
directly specify 6 GiB in the Final size field, as shown in Figure 10-121 on page 719.

Figure 10-121 Shrinking a volume

4. When you are finished, click Shrink and the changes are visible on your host.
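
A hedged CLI sketch of the 10 GiB to 6 GiB example above; the volume name is a
placeholder (verify the syntax for your code level):

   shrinkvdisksize -size 4 -unit gb ITSO_VOL_001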

10.8.10 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although you
can expand a volume easily by using the SVC, you must ensure that your operating system is
prepared for it and supports the volume expansion before you use this function.

Dynamic expansion of a volume is supported only when the volume is in use by one of the
following operating systems:
򐂰 AIX 5L V5.2 and higher
򐂰 Windows Server 2008, and Windows Server 2012 for basic and dynamic disks
򐂰 Windows Server 2003 for basic disks, and Windows Server 2003 with Microsoft hot fix
(Q327020) for dynamic disks

Important: For thin-provisioned volumes, the use of this method results in expanding its
virtual capacity. For more information about expanding its real capacity, see 10.8.12,
“Expanding the real capacity of a thin-provisioned or compressed volume” on page 723.

If your operating system supports expanding a volume, complete the following steps:
1. In the table, select the volume that you want to expand.
2. Click Actions → Expand, as shown in Figure 10-122 on page 720.

Figure 10-122 Expand volume action

Tip: You can also right-click a volume and select Expand.

3. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-123 on page 721.
You can enter how much you want to enlarge the volume by using the Expand by field, or
you can directly enter the final size that you want to use for the volume by using the Final
size field. The other field is computed automatically.
For example, if you have a 10 GiB volume and you want it to become 15 GiB, you can
specify 5 GiB in the Expand by field or you can directly specify 15 GiB in the Final size
field, as shown in Figure 10-123 on page 721. The maximum final size shows 42 GiB for
the volume.

Figure 10-123 Expanding a volume

Volume expansion: The following considerations are important:


򐂰 Expanding image-mode volumes is not supported.
򐂰 If you use volume mirroring, all copies must be synchronized before you expand
them.
򐂰 If an insufficient number of extents exist to expand your volume to the specified size,
you receive an error message.

4. When you are finished, click Expand, as shown in Figure 10-123.
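
A hedged CLI sketch of the 10 GiB to 15 GiB example above; the volume name is a
placeholder (verify the syntax for your code level):

   expandvdisksize -size 5 -unit gb ITSO_VOL_001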

10.8.11 Shrinking the real capacity of a thin-provisioned or compressed volume
From a host’s perspective, shrinking the virtual capacity of a volume affects host access. For
more information about these effects, see 10.8.9, “Shrinking a volume” on page 717. The real
capacity shrinkage of a volume, which is described in this section, is not apparent to the hosts.

Note: In the following sections, we demonstrate real capacity operations by using a
thin-provisioned volume as an example. However, the same actions apply to a compressed
volume preset.

To shrink the real capacity of a thin-provisioned or compressed volume, complete the
following steps:
1. Depending on the case, use one of the following actions:
– For a non-mirrored volume, select the thin-provisioned or compressed volume and click
Actions → Volume Copy Actions → Thin-Provisioned (or Compressed) → Shrink,
as shown in Figure 10-124 on page 722.

Figure 10-124 Non-mirrored volume: Thin-Provisioned Shrink action

Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Shrink.

– For a mirrored volume, select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify and click Actions → Thin-Provisioned (or
Compressed) → Shrink, as shown in Figure 10-125.

Figure 10-125 Mirrored volume: Thin-Provisioned Shrink action

Tip: You can also right-click the thin-provisioned or compressed mirrored copy and
select Thin-Provisioned (or Compressed) → Shrink.

2. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-126 on page 723.
You can enter the amount by which you want to shrink the volume by using the Shrink by
field, or you can enter the final real capacity directly that you want to use for the volume by
using the Final real capacity field. The other field is computed automatically.
For example, if you have a current real capacity equal to 323.2 MiB and you want a final
real size that is equal to 200 MiB, you can specify 123.2 MiB in the Shrink by field, or you
can directly specify 200 MiB in the Final real capacity field, as shown in Figure 10-126 on
page 723.

3. When you are finished, click Shrink, as shown in Figure 10-126 on page 723, and the
changes become visible to your host.

Figure 10-126 Shrink Volume real capacity window
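
A hedged CLI sketch that approximates the real capacity example above; the -rsize
parameter acts on the real capacity rather than the virtual capacity, and the name and value
are placeholders (verify the options for your code level):

   shrinkvdisksize -rsize 123 -unit mb ITSO_VOL_TP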

10.8.12 Expanding the real capacity of a thin-provisioned or compressed volume
From a host perspective, the virtual capacity expansion of a volume affects the host access.
For more information about these effects, see 10.8.10, “Expanding a volume” on page 719.
The real capacity expansion of a volume is not apparent to the hosts.

To expand the real size of a thin-provisioned or compressed volume, complete the following
steps:
1. Depending on the case, use one of the following actions:
– For a non-mirrored volume, select the thin-provisioned or compressed volume and click
Actions → Volume Copy Actions → Thin-Provisioned (or Compressed) →
Expand, as shown in Figure 10-127 on page 724.

Figure 10-127 Non-mirrored volume: Compressed Expand action

Tip: You can also right-click the volume and select Volume Copy Actions →
Thin-Provisioned (or Compressed) → Expand.

– For a mirrored volume, select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify, and click Actions → Thin-Provisioned (or
Compressed) → Expand, as shown in Figure 10-128.

Figure 10-128 Mirrored volume: Thin-Provisioned Expand action

Tip: You can also right-click the thin-provisioned or compressed copy and select
Thin-Provisioned (or Compressed) → Expand.

2. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-129 on page 725.
You can enter the amount by which you want to expand the volume by using the Expand
by field, or you can enter the final real capacity size directly that you want to use for the
volume by using the Final real capacity field. The other field is computed automatically.
For example, if you have a current real capacity equal to 200 MiB and you want a final real
size equal to 700 MiB, you can specify 500 MiB in the Expand by field or you can directly
specify 700 MiB in the Final real capacity field, as shown in Figure 10-129 on page 725.

3. When you are finished, click Expand, as shown in Figure 10-129 on page 725, and the
changes become visible on your host.

Figure 10-129 Expand Volume real capacity window
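
A hedged CLI sketch of the 200 MiB to 700 MiB example above; the name is a placeholder,
and for a mirrored volume you can add -copy id to address a specific copy (verify the options
for your code level):

   expandvdisksize -rsize 500 -unit mb ITSO_VOL_TP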

10.8.13 Migrating a volume


To migrate a volume, complete the following steps:
1. In the table, select the volume that you want to migrate.
2. Click Actions → Migrate to Another Pool, as shown in Figure 10-130.

Figure 10-130 Migrate to Another Pool action

Tip: You can also right-click a volume and select Migrate to Another Pool.

3. The Migrate Volume Copy window opens, as shown in Figure 10-131 on page 726. Select
the storage pool to which you want to reassign the volume. You are presented with a list of
only the storage pools with the same extent size.
4. When you finish making your selections, click Migrate to begin the migration process.

Figure 10-131 Migrate Volume Copy window

Important: After a migration starts, you cannot stop it manually. The migration continues
until it is complete, unless it is suspended by an error condition or the volume that is
being migrated is deleted.

5. You can check the progress of the migration by using the Running Tasks status area, as
shown in Figure 10-132.

Figure 10-132 Running Tasks

To expand this area, click the icon, and then click Migration. Figure 10-133 shows a
detailed view of the running tasks.

Figure 10-133 Long Running Task: Volume migration

When the migration is finished, the volume is part of the new pool.
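
A hedged CLI sketch with placeholder names; as in the GUI, the target pool must have the
same extent size as the source pool (verify the syntax for your code level):

   migratevdisk -vdisk ITSO_VOL_001 -mdiskgrp ITSO_Pool2
   lsmigrate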

10.8.14 Adding a mirrored copy to an existing volume
You can add a mirrored copy to an existing volume, which provides two copies of the
underlying disk extents.

Tip: You can also create a mirrored volume by selecting the Mirror or Thin Mirror preset
during the volume creation, as shown in Figure 10-94 on page 703.

You can use a volume mirror for any operation for which you can use a volume. It is not
apparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.

Creating a volume mirror from an existing volume is not restricted to the same storage pool;
therefore, this method is ideal to use to protect your data from a disk system or an array
failure. If one copy of the mirror fails, it provides continuous data access to the other copy.
When the failed copy is repaired, the copies automatically resynchronize.

You can also use a volume mirror as an alternative migration tool, where you synchronize the
mirror and then split off the original side of the mirror. The volume stays online, and it can
be used normally while the data is being synchronized. The two copies can also have different
structures (striped, image, sequential, or space-efficient) and can use different extent sizes.

To create a mirror copy from within a volumes panel, complete the following steps:
1. In the table, select the volume to which you want to add a mirrored copy.
2. Click Actions → Volume Copy Actions → Add Mirrored Copy, as shown in
Figure 10-134.

Figure 10-134 Add Mirrored Copy actions

Tip: You can also right-click a volume and select Volume Copy Actions → Add
Mirrored Copy.

3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-135. You can perform the
following steps separately or in combination:
a. Select the storage pool in which you want to put the copy. To maintain higher
availability, choose a separate group.
b. For the Volume type, select Thin-Provisioned to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
• Real Size: 2% of Virtual Capacity
• Enable Autoexpand: Active
• Warning Threshold: 80% of Virtual Capacity
• Thin-Provisioned Grain Size: 256 KB

Changing options: You can change only Real Size, Enable Autoexpand, and
Warning Threshold after the thin-provisioned volume copy is added.

For more information about modifying the real size of your thin-provisioned volume,
see 10.8.11, “Shrinking the real capacity of a thin-provisioned or compressed
volume” on page 721, and 10.8.12, “Expanding the real capacity of a
thin-provisioned or compressed volume” on page 723.

4. Click Add Copy.

Figure 10-135 Add copy to volume window

5. You can check the synchronization progress by using the Running Tasks status area, as
shown in Figure 10-136. To expand this status area, click the Running Tasks icon and click
Volume Synchronization.

Figure 10-136 Running Tasks: Volume Synchronization

6. When the synchronization is finished, the volume has a second, synchronized copy in the
new pool, as shown in Figure 10-137.

Figure 10-137 Mirrored volume

Primary copy: As shown in Figure 10-137, the primary copy is identified with an
asterisk (*). In this example, Copy 1* is the primary copy and Copy 0 is the secondary
copy.
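
Tip: A mirrored copy can also be added from the CLI with a command similar to the following
example. The volume and storage pool names are examples only; for a thin-provisioned copy,
parameters such as -rsize, -autoexpand, -grainsize, and -warning can be added:

svctask addvdiskcopy -mdiskgrp Pool_DS3500-2 ITSO_VOL01
svcinfo lsvdisksyncprogress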

10.8.15 Deleting a mirrored copy from a volume mirror


To remove a volume copy, complete the following steps:
1. In the table, select the volume copy that you want to remove. Click Actions → Delete this
Copy, as shown in Figure 10-138 on page 730.

Figure 10-138 Delete this Copy action

Tip: You can also right-click a volume and select Delete this Copy.

2. The Warning window opens, as shown in Figure 10-139. Click Yes to confirm.

Figure 10-139 Warning window

If the copy that you intend to delete is the primary copy and the secondary copy is not yet
synchronized, the attempt fails and you must wait until the synchronization completes.

10.8.16 Splitting a volume copy


To split off a synchronized volume copy to a new volume, complete the following steps:
1. In the table, select the volume copy that you want to split and click Actions → Split into
New Volume, as shown in Figure 10-140.

Figure 10-140 Split into New Volume action

Tip: You can also right-click a volume and select Split into New Volume.

2. The Split Volume Copy window opens, as shown in Figure 10-141 on page 731. In this
window, enter a name for the new volume.

Volume name: If you do not provide a name, the SVC automatically generates the
name vdiskx (where x is the ID sequence number that is assigned by the SVC
internally). If you want to provide a name, you can use the letters A - Z and a - z, the
numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63
characters.

3. Click Split Volume Copy, as shown in Figure 10-141.

Figure 10-141 Split Volume Copy window

This new volume is now available to be mapped to a host.

Important: After you split a volume mirror, you cannot resynchronize or recombine them.
You must create a new volume copy.
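
Tip: From the CLI, a synchronized copy can be split off with a command similar to the
following example. The copy ID, the new volume name, and the volume name are examples only:

svctask splitvdiskcopy -copy 1 -name ITSO_VOL01_SPLIT ITSO_VOL01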

10.8.17 Validating volume copies


To validate the copies of a mirrored volume, complete the following steps:
1. In the table, select a copy of this volume. Click Actions → Validate Volume Copies, as
shown in Figure 10-142.

Figure 10-142 Validate Volume Copies actions

2. The Validate Volume Copies window opens, as shown in Figure 10-143. In this window,
select one of the following options:
– Generate Event of Differences: Use this option if you want to verify only that the
mirrored volume copies are identical. If a difference is found, the command stops and
logs an error that includes the logical block address (LBA) and the length of the first
difference. By rerunning the validation and starting at a different LBA each time, you can
use this option to count the number of differences on a volume.
– Overwrite Differences: Use this option to overwrite the content from the primary volume
copy to the other volume copy. The command corrects any differing sectors by copying
the sectors from the primary copy to the copies that are compared. Upon completion,
the command process logs an event, which indicates the number of differences that
were corrected. Use this option if you are sure that the primary volume copy data is
correct or that your host applications can handle incorrect data.
– Return Media Error to Host: Use this option to convert sectors on all volume copies that
contain different contents into virtual medium errors. Upon completion, the command
logs an event, which indicates the number of differences that were found, the number
of differences that were converted into medium errors, and the number of differences
that were not converted. Use this option if you are unsure what data is correct, and you
do not want an incorrect version of the data to be used.

Figure 10-143 Validate Volume Copies window

3. Click Validate. The volume is now verified.
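
Tip: The Generate Event of Differences, Overwrite Differences, and Return Media Error to Host
options correspond to the -validate, -resync, and -medium parameters of the repairvdiskcopy
command. The following sketch uses an example volume name; you can check the progress with
lsrepairvdiskcopyprogress:

svctask repairvdiskcopy -validate ITSO_VOL01
svcinfo lsrepairvdiskcopyprogress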

10.8.18 Migrating to a thin-provisioned volume by using volume mirroring


To migrate to a thin-provisioned volume, complete the following steps:
1. In the table, select the volume to migrate to a thin-provisioned volume.
2. Click Actions → Volume Copy Actions → Add Mirrored Copy, as shown in
Figure 10-144 on page 733.

Figure 10-144 Add Mirrored Copy actions

Tip: You can also right-click a volume and select Volume Copy Actions → Add
Mirrored Copy.

3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 10-145 on page 734. You can
perform the following steps separately or in combination:
a. Select the storage pool in which you want to put the copy. To maintain higher
availability, choose a separate group.
b. For the Volume Type, select Thin-Provisioned to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
• Real Size: 2% of Virtual Capacity
• Autoexpand: Active
• Warning Threshold: 80% of Virtual Capacity
• Thin-Provisioned Grain Size: 256 KB

Changing options: You can change Real Size, Autoexpand, and Warning
Threshold after the volume copy is added in the GUI. To change Thin-Provisioned
Grain Size, you must use the CLI.

4. Click Add Copy.

Figure 10-145 Add Volume Copy window

5. You can check the synchronization progress by using the Running Tasks status area, as
shown in Figure 10-132 on page 726.
To expand this status area, click the Running Tasks icon and click Volume Synchronization.
Figure 10-146 shows the detailed view of the running tasks.

Figure 10-146 Running Tasks status area: Volume Synchronization

Mirror Sync Rate: You can change the Mirror Sync Rate (the default is 50%) by
modifying the volume properties. For more information, see 10.8.4 on page 708.

6. When the synchronization is finished, in the table, select the original, non-thin-provisioned
copy that you want to remove. Select Actions → Delete this Copy, as shown in
Figure 10-147.

Figure 10-147 Delete this Copy window

Tip: You can also right-click a volume and select Delete this Copy.

7. The Warning window opens, as shown in Figure 10-148. Click Yes to confirm your choice.

Figure 10-148 Warning window

Tip: If you try to remove the primary copy before it is completely synchronized with the
other copy, you receive the following message:

The command failed because the copy specified is the only synchronized copy.

You must wait until the end of the synchronization to remove the primary copy.

When the copy is deleted, your thin-provisioned volume is ready for use and automatically
mapped to the original host.
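
Tip: The same migration can be scripted from the CLI: add a thin-provisioned copy, wait for
the synchronization to complete, and then remove the fully allocated copy. The following
sketch uses example names and assumes that the original copy has ID 0:

svctask addvdiskcopy -mdiskgrp Pool_DS3500-2 -rsize 2% -autoexpand -grainsize 256 -warning 80% ITSO_VOL01
svcinfo lsvdisksyncprogress
svctask rmvdiskcopy -copy 0 ITSO_VOL01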

10.8.19 Creating a volume in image mode


For more information about the required steps to create a volume in image mode, see
Chapter 6, “Data migration” on page 241.

10.8.20 Migrating a volume to an image mode volume


For more information about the required steps to migrate a volume to an image mode volume,
see Chapter 6, “Data migration” on page 241.

10.8.21 Creating an image mode mirrored volume


For more information about the required steps to create an image mode mirrored volume, see
Chapter 6, “Data migration” on page 241.

10.9 Copy Services and managing FlashCopy


It is often easier to work with the FlashCopy function from the GUI if you have a reasonable
number of FlashCopy mappings. However, in enterprise data centers with many mappings, we
suggest that you use the CLI to run your FlashCopy commands.

Copy Services: For more information about the functionality of Copy Services in the SVC
environment, see Chapter 8, “Advanced Copy Services” on page 405.

In this section, we describe the tasks that you can perform at a FlashCopy level. The following
methods can be used to visualize and manage your FlashCopy:
• Use the SVC Overview panel. Move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy, as shown in Figure 10-149.

Figure 10-149 FlashCopy panel

In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost and that data is replaced
by the copied data.
• Use the Consistency Groups panel, as shown in Figure 10-150. A Consistency Group is a
container for mappings. You can add many mappings to a Consistency Group.

Figure 10-150 Consistency Groups panel

• Use the FlashCopy Mappings panel, as shown in Figure 10-151. A FlashCopy mapping
defines the relationship between a source volume and a target volume.

Figure 10-151 FlashCopy Mappings panel

10.9.1 Creating a FlashCopy mapping


In this section, we create FlashCopy mappings for volumes and their targets.

Complete the following steps:


1. From the SVC Overview panel, move the mouse pointer over Copy Services in the
dynamic menu and click FlashCopy. The FlashCopy panel opens, as shown in
Figure 10-152.

Figure 10-152 FlashCopy panel

2. Select the volume for which you want to create the FlashCopy relationship, as shown in
Figure 10-153 on page 738.

Multiple FlashCopy mappings: To create multiple FlashCopy mappings at one time,
select multiple volumes by holding down Ctrl and clicking the entries that you want.

Figure 10-153 FlashCopy mapping: Select the volume (or volumes)

Depending on whether you created the target volumes for your FlashCopy mappings or you
want the SVC to create the target volumes for you, the following options are available:
• If you created the target volumes, see “Using existing target volumes” on page 738.
• If you want the SVC to create the target volumes for you, see “Creating target volumes” on
page 742.

Using existing target volumes


Complete the following steps to use existing target volumes for the FlashCopy mappings:
1. Select the target volume that you want to use. Then, click Actions → Advanced
FlashCopy → Use Existing Target Volumes, as shown in Figure 10-154.

Figure 10-154 Using existing target volumes

2. The Create FlashCopy Mapping window opens (Figure 10-155 on page 739). In this
window, you must create the relationship between the source volume (the disk that is
copied) and the target volume (the disk that receives the copy). A mapping can be created
between any two volumes inside an SVC clustered system. Select a source volume and a
target volume for your FlashCopy mapping, and then click Add. If you must create other
copies, repeat this step.

Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.

Figure 10-155 Create a FlashCopy Mapping by using an existing target volume

To remove a relationship that was created, click the red X next to it, as shown in Figure 10-156.

Volumes: The volumes do not have to be in the same I/O Group or storage pool.

3. Click Next after you create all of the relationships that you need, as shown in
Figure 10-156.

Figure 10-156 Create FlashCopy Mapping window

4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 10-157 on page 740:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates a replica of the source volume on a target volume. The copy can be
changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
source and target volumes.

Figure 10-157 Create FlashCopy Mapping window

For each preset, you can customize various advanced options. You can access these
settings by clicking Advanced Settings.
5. The advanced setting options are shown in Figure 10-158.

Figure 10-158 Create FlashCopy Mapping Advanced Settings

If you prefer not to customize these settings, go directly to step 6 on page 741.
You can customize the following advanced setting options, as shown in Figure 10-158:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.

– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
After you complete your modifications, click Next.
6. You can choose whether to add the mappings to a Consistency Group or not.
If you want to include this FlashCopy mapping in a Consistency Group, select Yes, add
the mappings to a consistency group in the window that is shown in Figure 10-159. You
also can select the Consistency Group from the drop-down list box.

Figure 10-159 Add the mappings to a Consistency Group

Or, if you do not want to include this FlashCopy mapping in a Consistency Group, select
No, do not add the mappings to a consistency group.
Click Finish, as shown in Figure 10-160.

Figure 10-160 Do not add the mappings to a Consistency Group

7. Check the result of this FlashCopy mapping. For each FlashCopy mapping relationship
that was created, a mapping name is automatically generated that starts with fcmapX,
where X is the next available number. If needed, you can rename these mappings, as
shown in Figure 10-161. For more information, see 10.9.11, “Renaming a FlashCopy
mapping” on page 759.

Figure 10-161 FlashCopy Mapping

The FlashCopy mapping is now ready for use.
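
Tip: If you manage many mappings, a command similar to the following example creates the
same mapping from the CLI. The mapping, source, and target names are examples only, and
-consistgrp can be added to place the mapping in a Consistency Group:

svctask mkfcmap -name fcmap_db01 -source ITSO_VOL01 -target ITSO_VOL01_FC -copyrate 50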

Creating target volumes


Complete the following steps to create target volumes for FlashCopy mapping:
1. If you did not create a target volume for this source volume, click Actions → Advanced
FlashCopy → Create New Target Volumes, as shown in Figure 10-162.

Target volume naming: If the target volume does not exist, the target volume is
created. The target volume name is based on its source volume and a generated
number at the end, for example, source_volume_name_XX, where XX is a number that
was generated dynamically.

Figure 10-162 Selecting Create New Target Volumes

2. In the Create FlashCopy Mapping window (Figure 10-163 on page 743), you must select
one FlashCopy preset. The GUI provides the following presets to simplify common
FlashCopy operations:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.

Figure 10-163 Create FlashCopy Mapping window

For each preset, you can customize various advanced options. To access these settings,
click Advanced Settings. The Advanced Setting options show in Figure 10-164.

Figure 10-164 Create FlashCopy Mapping Advanced Settings

If you prefer not to customize these advanced settings, go directly to step 3 on page 744.
You can customize the advanced setting options that are shown in Figure 10-164:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
3. You can choose whether to add this FlashCopy mapping to a Consistency Group or not.
If you want to include this FlashCopy mapping in a Consistency Group, select Yes, add
the mappings to a consistency group in the next window (Figure 10-165). Select the
Consistency Group from the drop-down list.
If you do not want to include this FlashCopy mapping in a Consistency Group, select No,
do not add the mappings to a consistency group.
Click Finish.

Figure 10-165 Selecting the option to add the mappings to a Consistency Group

4. The next window shows the volume capacity management dialog. Choose one of the four
options based on the capacity preset that you want to use with your target volume
(Figure 10-166). Here, you can decide whether the target volume manages capacity in a
generic, thin-provisioned, or compressed manner, or whether the target volume inherits its
capacity properties from the source volume.

Figure 10-166 Create FlashCopy mapping: Capacity management

If you select thin-provisioning as the method to manage the capacity of your target volume,
you can set up the following parameters (as shown in Figure 10-167):
– Real Capacity: Enter the real size that you want to allocate. This size is the amount of
disk space that is allocated, which can be a percentage of the virtual size or a specific
number in GBs.
– Automatically Expand: Select this option so that the real capacity can grow automatically, as required.
– Warning Threshold: Enter a percentage or select a specific size for the usage threshold
warning.

Figure 10-167 Create FlashCopy mapping capacity management for the thin-provisioning preset

Similarly, if you want to use the compression preset, you can configure the real capacity,
auto-expand, and warning threshold setting on the target volume, as shown in
Figure 10-168.

Figure 10-168 Create FlashCopy mapping capacity management for the compression preset

5. In the next window (Figure 10-169), select the storage pool that is used to automatically
create targets. You can choose to use the same storage pool that is used by the source
volume. Or, you can select a storage pool from a list. Click Finish.

Figure 10-169 Select the storage pool to create new targets

6. Check the result of this FlashCopy mapping, as shown in Figure 10-170. For each
FlashCopy mapping relationship that is created, a mapping name is automatically
generated that starts with fcmapX where X is the next available number. If necessary, you
can rename these mappings, as shown in Figure 10-170. For more information, see
10.9.11, “Renaming a FlashCopy mapping” on page 759.

Figure 10-170 FlashCopy mapping

The FlashCopy mapping is ready for use.

Tip: You can start FlashCopy from the SVC GUI. However, the use of the SVC GUI might
be impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically, or at varying times. In these cases, creating a script by using the CLI might be
more convenient.

10.9.2 Single-click snapshot
The snapshot creates a point-in-time backup of production data. The snapshot is not intended
to be an independent copy. Instead, it is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot uses the following preset parameters:
• Background copy: No
• Incremental: No
• Delete after completion: No
• Cleaning rate: No
• Primary copy source pool: Target pool

To create and start a snapshot, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy.
2. Select the volume that you want to create a snapshot of and click Actions → Create
Snapshot, as shown in Figure 10-171.

Figure 10-171 Create Snapshot option

3. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks status area, as shown in
Figure 10-172.

Figure 10-172 Snapshot created and started
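
Tip: A comparable snapshot can be prepared from the CLI by creating a mapping with a
background copy rate of zero and starting it, as shown in the following sketch. The names are
examples only, and the thin-provisioned target volume must exist before the mapping is
created:

svctask mkfcmap -name snap_map1 -source ITSO_VOL01 -target ITSO_VOL01_SNAP -copyrate 0
svctask startfcmap -prep snap_map1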

10.9.3 Single-click clone
The clone preset creates an exact replica of the volume, which can be changed without
affecting the original volume. After the copy completes, the mapping that was created by the
preset is automatically deleted.

The clone preset uses the following parameters:


• Background copy rate: 50
• Incremental: No
• Delete after completion: Yes
• Cleaning rate: 50
• Primary copy source pool: Target pool

To create and start a clone, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy.
2. Select the volume that you want to clone.
3. Click Actions → Create Clone, as shown in Figure 10-173.

Figure 10-173 Create Clone option

4. A volume is created as a target volume for this clone in the same pool as the source
volume. The FlashCopy mapping is created and started. You can check the FlashCopy
progress in the Progress column or in the Running Tasks Status column. After the
FlashCopy clone is created, the mapping is removed and the new cloned volume becomes
available, as shown in Figure 10-174.

Figure 10-174 Clone created and FlashCopy relationship removed
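
Tip: The clone preset roughly corresponds to a CLI mapping that is created with the
-autodelete parameter, as shown in the following sketch (the names are examples only):

svctask mkfcmap -name clone_map1 -source ITSO_VOL01 -target ITSO_VOL01_CLONE -copyrate 50 -cleanrate 50 -autodelete
svctask startfcmap -prep clone_map1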

10.9.4 Single-click backup
The backup creates a point-in-time replica of the production data. After the copy completes,
the backup view can be refreshed from the production data, with minimal copying of data from
the production volume to the backup volume.

The backup preset uses the following parameters:


• Background copy rate: 50
• Incremental: Yes
• Delete after completion: No
• Cleaning rate: 50
• Primary copy source pool: Target pool

To create and start a backup, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy.
2. Select the volume that you want to back up.
3. Click Actions → Create Backup, as shown in Figure 10-175.

Figure 10-175 Create Backup option

4. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column, as shown in
Figure 10-176, or in the Running Tasks Status column.

Figure 10-176 Backup created and started
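
Tip: The backup preset roughly corresponds to a CLI mapping that is created with the
-incremental parameter, as shown in the following sketch (the names are examples only):

svctask mkfcmap -name backup_map1 -source ITSO_VOL01 -target ITSO_VOL01_BKP -copyrate 50 -cleanrate 50 -incremental
svctask startfcmap -prep backup_map1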

10.9.5 Creating a FlashCopy Consistency Group
To create a FlashCopy Consistency Group in the SVC GUI, complete the following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups. The Consistency Groups panel opens, as
shown in Figure 10-177.

Figure 10-177 Consistency Groups panel

2. Click Create Consistency Group and enter the FlashCopy Consistency Group name that
you want to use and click Create (Figure 10-178).

Figure 10-178 Create Consistency Group window

Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The Consistency Group name can be 1 - 63 characters.

Figure 10-179 on page 751 shows the result.

Figure 10-179 New Consistency Group
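
Tip: From the CLI, a FlashCopy Consistency Group can be created with a command similar to
the following example (the group name is an example only):

svctask mkfcconsistgrp -name ITSO_FC_CG1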

10.9.6 Creating FlashCopy mappings in a Consistency Group


In this section, we describe how to create FlashCopy mappings for volumes and their related
targets. The source and target volumes were created before this operation.

Complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups. The Consistency Groups panel opens, as
shown in Figure 10-179.
2. Select the Consistency Group (Figure 10-180) in which you want to create the FlashCopy
mapping. If you prefer not to create the FlashCopy mapping in a Consistency Group, select
Not in a Group.

Figure 10-180 Consistency Group selection

3. If you selected a Consistency Group, click Actions → Create FlashCopy Mapping, as
shown in Figure 10-181.

Figure 10-181 Create FlashCopy Mapping action for a Consistency Group

4. If you did not select a Consistency Group, click Create FlashCopy Mapping, as shown in
Figure 10-182.

Consistency Groups: If no Consistency Group is defined, the mapping is a
stand-alone mapping. It can be prepared and started without affecting other mappings.
All mappings in the same Consistency Group must have the same status to maintain
the consistency of the group.

Figure 10-182 Create FlashCopy Mapping

5. The Create FlashCopy Mapping window opens, as shown in Figure 10-183. In this
window, you must create the relationships between the source volumes (the volumes that
are copied) and the target volumes (the volumes that receive the copy). A mapping can be
created between any two volumes in a clustered system.

Important: The source volume and the target volume must be of equal size.

Figure 10-183 Create FlashCopy Mapping window

Tip: The volumes do not have to be in the same I/O Group or storage pool.

6. Select a volume in the Source Volume column by using the drop-down list. Then, select a
volume in the Target Volume column by using the drop-down list. Click Add, as shown in
Figure 10-183. Repeat this step to create other relationships.
To remove a relationship that was created, click the red X next to it.

Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.

7. Click Next after all of the relationships that you want to create are shown (Figure 10-184).

Figure 10-184 Create FlashCopy Mapping with the relationships that were created

8. In the next window, you must select one FlashCopy preset. The GUI provides the following
presets to simplify common FlashCopy operations, as shown in Figure 10-185:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.

Figure 10-185 Create FlashCopy Mapping window

Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings.
If you prefer not to customize these settings, go directly to step 9.

You can customize the following advanced setting options, as shown in Figure 10-186:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental copies: Even if the type of the FlashCopy mapping is incremental, the
first copy process copies all of the data from the source volume to the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.

Figure 10-186 Create FlashCopy Mapping: Advanced Settings

9. If you do not want to create these FlashCopy mappings from a Consistency Group (step 3
on page 751), you must confirm your choice by selecting No, do not add the mappings
to a consistency group, as shown in Figure 10-187 on page 755.

Figure 10-187 Do not add the mappings to a Consistency Group

10.Click Finish.
11.Check the result of this FlashCopy mapping in the Consistency Groups window, as shown
in Figure 10-188.
For each FlashCopy mapping relationship that you created, a mapping name is
automatically generated that starts with fcmapX where X is an available number. If
necessary, you can rename these mappings. For more information, see 10.9.11,
“Renaming a FlashCopy mapping” on page 759.

Figure 10-188 Create FlashCopy mappings result

Tip: You can start FlashCopy from the SVC GUI. However, if you plan to handle many
FlashCopy mappings or Consistency Groups periodically, or at varying times, creating a
script by using the operating system shell CLI might be more convenient.

10.9.7 Showing related volumes


Complete the following steps to show related volumes for a specific FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the volume (from the FlashCopy panel only) or the FlashCopy mapping that you
want to view in this Consistency Group.
3. Click Actions → Show Related Volumes, as shown in Figure 10-189 on page 756.

Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes.

Figure 10-189 Show Related Volumes

In the Related Volumes window (Figure 10-190), you can see the related mapping for a
volume. If you click one of these volumes, you can see its properties. For more information
about volume properties, see 10.8.1, “Volume information” on page 700.

Figure 10-190 Related Volumes

10.9.8 Moving a FlashCopy mapping to a Consistency Group


Complete the following steps to move a FlashCopy mapping to a Consistency Group:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the FlashCopy mapping that you want to move to a Consistency Group or the
FlashCopy mapping for which you want to change the Consistency Group.
3. Click Actions → Move to Consistency Group, as shown in Figure 10-191 on page 757.

Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group.

Figure 10-191 Move to Consistency Group action

4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list (Figure 10-192).

Figure 10-192 Move FlashCopy mapping to Consistency Group window

5. Click Move to Consistency Group to confirm your changes.

10.9.9 Removing a FlashCopy mapping from a Consistency Group


Complete the following steps to remove a FlashCopy mapping from a Consistency Group:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the FlashCopy mapping that you want to remove from a Consistency Group.
3. Click Actions → Remove from Consistency Group, as shown in Figure 10-193.

Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group.

Figure 10-193 Remove from Consistency Group action

In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 10-194.

Figure 10-194 Remove FlashCopy Mapping from Consistency Group

10.9.10 Modifying a FlashCopy mapping


Complete the following steps to modify a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to modify.
3. Click Actions → Edit Properties, as shown in Figure 10-195.

Figure 10-195 Edit Properties

Tip: You can also right-click a FlashCopy mapping and select Edit Properties.

4. In the Edit FlashCopy Mapping window, you can modify the following parameters for a
selected FlashCopy mapping, as shown in Figure 10-196 on page 759:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.

Figure 10-196 Edit FlashCopy Mapping

5. Click Save to confirm your changes.
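
Tip: The same properties can be changed from the CLI with a command similar to the
following example (the mapping name and the rates are examples only):

svctask chfcmap -copyrate 80 -cleanrate 40 fcmap0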

10.9.11 Renaming a FlashCopy mapping


Complete the following steps to rename a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups or FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to rename.
3. Click Actions → Rename Mapping, as shown in Figure 10-197.

Tip: You can also right-click a FlashCopy mapping and select Rename Mapping.

Figure 10-197 Rename Mapping action

4. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to the FlashCopy mapping and click Rename, as shown in Figure 10-198 on page 760.

Figure 10-198 Renaming a FlashCopy mapping

FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
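
Tip: From the CLI, a mapping can be renamed with a command similar to the following example
(the names are examples only):

svctask chfcmap -name fcmap_db01 fcmap0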

10.9.12 Renaming a Consistency Group


To rename a Consistency Group, complete the following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups.
2. From the left panel, select the Consistency Group that you want to rename. Then, select
Actions → Rename, as shown in Figure 10-199.

Figure 10-199 Renaming a Consistency Group

3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 10-200 on page 761.

Figure 10-200 Changing the name for a Consistency Group

Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.

The new Consistency Group name is displayed in the Consistency Group panel.

10.9.13 Deleting a FlashCopy mapping


Complete the following steps to delete a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings icon.
2. In the table, select the FlashCopy mapping that you want to delete.

Selecting multiple FlashCopy mappings: To select multiple FlashCopy mappings,
hold down Ctrl and click the other entries that you want to delete. This capability is only
available in the Consistency Groups panel and the FlashCopy Mappings panel.

3. Click Actions → Delete Mapping, as shown in Figure 10-201.

Tip: You can also right-click a FlashCopy mapping and select Delete Mapping.

Figure 10-201 Selecting the Delete Mapping option

4. The Delete FlashCopy Mapping window opens, as shown in Figure 10-202. In the “Verify
the number of FlashCopy mappings that you are deleting” field, you must enter the
number of FlashCopy mappings that you want to delete. This verification was added to help avoid
deleting the wrong mappings.
If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target volume is inconsistent, or if the target volume has other
dependencies.
Click Delete, as shown in Figure 10-202.

Figure 10-202 Delete FlashCopy Mapping

10.9.14 Deleting a FlashCopy Consistency Group

Important: Deleting a Consistency Group does not delete the FlashCopy mappings.

Complete the following steps to delete a FlashCopy Consistency Group:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups.
2. Select the FlashCopy Consistency Group that you want to delete.
3. Click Actions → Delete, as shown in Figure 10-203 on page 763.

Figure 10-203 Delete Consistency Group action

4. The Warning window opens, as shown in Figure 10-204. Click Yes.

Figure 10-204 Warning window

10.9.15 Starting the FlashCopy copy process


When the FlashCopy mapping is created, the copy process can be started. Only mappings
that are not members of a Consistency Group can be started individually. Complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then select FlashCopy Mappings.
2. In the table, choose the FlashCopy mapping that you want to start.
3. Click Actions → Start (as shown in Figure 10-205) to start the FlashCopy process.

Tip: You can also right-click a FlashCopy mapping and select Start.

Figure 10-205 Start the FlashCopy process action

4. You can check the FlashCopy progress in the Progress column of the table or in the
Running Tasks status area. After the task completes, the FlashCopy mapping status is in a
Copied state, as shown in Figure 10-206.

Figure 10-206 Checking the FlashCopy progress
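
Tip: From the CLI, a command similar to the following example prepares and starts a
stand-alone mapping in one step (the mapping name is an example only):

svctask startfcmap -prep fcmap0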

10.9.16 Stopping the FlashCopy copy process


When a FlashCopy copy process is stopped, the target volume becomes invalid and it is set
offline by the SVC. The FlashCopy mapping must be restarted to bring the target
volume online again.

Important: Stop a FlashCopy copy process only when the data on the target volume is
useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by the SVC.

Complete the following steps to stop a FlashCopy copy process:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and select FlashCopy Mappings.
2. Choose the FlashCopy mapping that you want to stop.
3. Click Actions → Stop (as shown in Figure 10-207) to stop the FlashCopy Consistency
Group copy process.

Figure 10-207 Stopping the FlashCopy copy process

The FlashCopy mapping status changes to Stopped, as shown in Figure 10-208 on
page 765.

Figure 10-208 FlashCopy Mapping status
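
Tip: From the CLI, a mapping can be stopped with a command similar to the following example
(the mapping name is an example only):

svctask stopfcmap fcmap0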

10.10 Copy Services: Managing remote copy


It is often easier to work with Metro Mirror or Global Mirror by using the GUI if you have only
a few relationships. When many relationships are used, we suggest that you run your commands
by using the CLI.

In this section, we describe the tasks that you can perform at a remote copy level.

The following panels are used to visualize and manage your remote copies:
• The Remote Copy panel, as shown in Figure 10-209.
By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same SVC clustered
system or on two separate SVC systems.
To access the Remote Copy panel, move the mouse pointer over the Copy Services
selection and click Remote Copy.

Figure 10-209 Remote Copy panel

• The Partnerships panel, as shown in Figure 10-210.


Partnerships can be used to create a disaster recovery environment, or to migrate data
between systems that are in separate locations. Partnerships define an association
between a local system and a remote system. To access the Partnerships panel, move the
mouse pointer over the Copy Services selection and click Partnerships.

Figure 10-210 Partnerships panel

10.10.1 System partnership
You can create more than a one-to-one system partnership by using Fibre Channel, FCoE,
or IP connectivity. A system partnership can exist among multiple SVC clustered systems.
You can use these partnerships to create the following types of configurations, each of which
uses a maximum of four connected SVC systems:
• Star configuration, as shown in Figure 10-211.

Figure 10-211 Star configuration

• Triangle configuration, as shown in Figure 10-212.

Figure 10-212 Triangle configuration

• Fully connected configuration, as shown in Figure 10-213.

Figure 10-213 Fully connected configuration

• Daisy-chain configuration, as shown in Figure 10-214.

Figure 10-214 Daisy-chain configuration

Important: All SVC clustered systems must be at level 5.1 or higher. A system can be
partnered with up to three remote systems. No more than four systems can be in the same
connected set. Only one IP partnership is supported for each system.

10.10.2 Creating a Fibre Channel partnership between two remote SVC systems
We perform this operation to create the partnership on both SVC systems by using FC.

Intra-cluster Metro Mirror: If you are creating an intra-cluster Metro Mirror, do not perform
this next step to create the SVC clustered system Metro Mirror partnership. Instead, go to
10.10.4, “Creating stand-alone remote copy relationships” on page 772.

To create an FC partnership between the SVC systems by using the GUI, complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 10-216
on page 768.

Figure 10-215 Selecting Partnerships window

2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 10-216.

Figure 10-216 Create a partnership

3. In the Create Partnership window (Figure 10-217), complete the following information:
– Select the partnership type, either Fibre Channel or IP.

Figure 10-217 Select the type of partnership

– Select an available partner system from the drop-down list, as shown in Figure 10-218
on page 769. If no candidate is available, the following error message is displayed:
This system does not have any candidates.

– Enter a link bandwidth (Mbps) that is used by the background copy process between
the systems in the partnership. Set this value so that it is less than or equal to the
bandwidth that can be sustained by the communication link between the systems. The
link must sustain any host requests and the rate of the background copy.
– Enter the background copy rate.
– Click OK to confirm the partnership relationship.

Figure 10-218 Create Partnership window

4. As shown in Figure 10-219, our partnership is in the Partially Configured state because
this work was performed only on one side of the partnership so far.

Figure 10-219 Viewing system partnerships

To fully configure the partnership between both systems, perform the same steps on the
other SVC system in the partnership. For simplicity and brevity, we show only the two most
significant windows when the partnership is fully configured.
5. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership. We
specify the available bandwidth for the background copy (200 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows
(which are shown in Figure 10-220 and Figure 10-221 on page 770) confirm that our
remote system partnership is now in the Fully Configured state. (Figure 10-220 shows the
remote system ITSO SVC 5 from the local system ITSO SVC 3.)

Figure 10-220 System ITSO SVC 3: Fully configured remote partnership

Figure 10-221 shows the remote system ITSO SVC 3 from the local system ITSO SVC 5.

Figure 10-221 System ITSO SVC 5: Fully configured remote partnership
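
Tip: From the CLI, a comparable Fibre Channel partnership is created by running a command
similar to the following example on both systems. The remote system name and the values are
examples only:

svctask mkfcpartnership -linkbandwidthmbits 200 -backgroundcopyrate 50 ITSO_SVC_5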

10.10.3 Creating an IP partnership between remote SVC systems


For more information about this feature, see the IBM Knowledge Center under the Metro
Mirror and Global Mirror > IP partnership requirements topic:
https://ibm.biz/BdEpPB

To create an IP partnership between SVC systems by using the GUI, complete the following
steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 10-215
on page 768.
2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 10-216 on page 768.
3. In the Create Partnership window (as shown in Figure 10-223 on page 771), complete the
following information:
– For the partnership type, select IP, as shown in Figure 10-222.

Figure 10-222 Create Partnership: Select IP

– Enter the IP address of the remote partner system.
– Select the link bandwidth in a unit of Mbps that is used by the background copy
process between the systems in the partnership. Set this value so that it is less than or
equal to the bandwidth that can be sustained by the communication link between the
systems. The link must sustain any host requests and the rate of the background copy.
– Select the background copy rate.
– If you want, enable CHAP authentication by providing a CHAP secret.

Figure 10-223 Create Partnership window for IP

As shown in Figure 10-224, our partnership is in the Partially Configured state because
only the work on one side of the partnership was completed so far.

Figure 10-224 Viewing system partnerships

To fully configure the partnership between both systems, we must perform the same steps
on the other SVC system in the partnership. For simplicity and brevity, we only show the
two most significant windows when the partnership is fully configured.
4. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership.
Specify the available bandwidth for the background copy (100 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows (as
shown in Figure 10-225 and Figure 10-226 on page 772) confirm that our remote system
partnership is now in the Fully Configured state. Figure 10-225 shows the remote system
ITSO SVC 5 from the local system ITSO SVC 3.

Figure 10-225 System ITSO SVC 3: Fully configured remote partnership

Figure 10-226 on page 772 shows the remote system ITSO SVC 3 from the local system
ITSO SVC 5.

Figure 10-226 System ITSO SVC 5: Fully configured remote partnership

Note: The definition of the bandwidth setting that is used when the partnership is created
changed. Previously, the bandwidth setting defaulted to 50 MBps, and it was the maximum
transfer rate from the primary site to the secondary site for the initial volume sync or resync.

The link bandwidth setting is now configured in Mbps (megabits per second), not MBps, and
you set it to a value that the communication link can sustain or to the value that is allocated
for replication. The background copy rate setting is now a percentage of the link bandwidth,
and it determines the bandwidth that is available for the initial sync and resync or for Global
Mirror with Change Volumes.
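
Tip: From the CLI, a comparable IP partnership is created by running a command similar to
the following example on both systems. The IP address and the values are examples only:

svctask mkippartnership -type ipv4 -clusterip 10.18.228.74 -linkbandwidthmbits 100 -backgroundcopyrate 50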

10.10.4 Creating stand-alone remote copy relationships


In this section, we create remote copy mappings for volumes with their respective remote
targets. The source and target volumes were created before this operation was done on both
systems. The target volume must have the same size as the source volume.

Complete the following steps to create stand-alone copy relationships:


1. From the SVC System panel, select Copy Services → Remote Copy → Actions.
2. Click Create Relationship, as shown in Figure 10-227.

Figure 10-227 Create Relationship action

3. In the Create Relationship window, select one of the following types of relationships that
you want to create (as shown in Figure 10-228 on page 773):
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
However, the copy might not contain the last few updates if a disaster recovery
operation is performed.

– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume. Changes can
then be copied to the remote system asynchronously. The FlashCopy relationship
exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volume is for internal use. The user cannot
manipulate it as they can with a normal FlashCopy mapping. Most svctask *fcmap
commands fail.

Figure 10-228 Select the type of relationship that you want to create

4. We want to create a Metro Mirror relationship. See Figure 10-229.

Figure 10-229 Selecting Metro Mirror as the type of relationship

Click Next.

5. In the next window, select the location of the auxiliary volumes, as shown in
Figure 10-230:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the
drop-down list.
After you make a selection, click Next.

Figure 10-230 Specifying the location of the auxiliary volumes

6. In the New Relationship window that is shown in Figure 10-231, you can create
relationships. Select a master volume in the Master drop-down list. Then, select an
auxiliary volume in the Auxiliary drop-down list for this master and click Add. If needed,
repeat this step to create other relationships.

Figure 10-231 Select a volume for mirroring

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list box for a specific source
volume.

7. To remove a relationship that was created, click the red X next to it, as shown in
Figure 10-232 on page 775.

Figure 10-232 Create the relationships between the master and auxiliary volumes

After all of the relationships that you want to create are shown, click Next.
8. Specify whether the volumes are synchronized, as shown in Figure 10-233. Then, click
Next.

Figure 10-233 Volumes are already synchronized

9. In the last window, select whether you want to start to copy the data, as shown in
Figure 10-234. Click Finish.

Figure 10-234 Synchronize now

10.Figure 10-235 shows that the task to create the relationship is complete.

Figure 10-235 Creation of Remote Copy relationship complete

The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status is Inconsistent Copying. You can check the copying progress in
the Running Tasks status area, as shown in Figure 10-236.

Figure 10-236 Remote Copy panel with an inconsistent copying status

After the copy is finished, the relationship status changes to Consistent synchronized.
Figure 10-237 on page 777 shows the Consistent synchronized status.

Figure 10-237 Consistent copy of the mirrored volumes
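
Tip: From the CLI, a stand-alone Metro Mirror relationship can be created and started with
commands similar to the following sketch. The volume, relationship, and system names are
examples only; add -sync if the volumes are already synchronized, or -global for Global
Mirror:

svctask mkrcrelationship -master ITSO_MM_MASTER -aux ITSO_MM_AUX -cluster ITSO_SVC_5 -name MM_REL1
svctask startrcrelationship MM_REL1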

10.10.5 Creating a Consistency Group


To create a Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Click Create Consistency Group, as shown in Figure 10-238.

Figure 10-238 Selecting the Create Consistency Group option

3. Enter a name for the Consistency Group, and then, click Next, as shown in Figure 10-239.

Consistency Group name: If you do not provide a name, the SVC automatically
generates the name rccstgrpX, where X is the ID sequence number that is assigned by
the SVC internally. You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The Consistency Group name can be 1 - 15 characters. No
blanks are allowed.

Figure 10-239 Enter a Consistency Group name

4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 10-240:
– On this system, which means that the volumes are local
– On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.

Figure 10-240 Location of auxiliary volumes

5. Select whether you want to add relationships to this group, as shown in Figure 10-241.
The following options are available:
– If you select Yes, click Next to continue the wizard and go to step 6.
– If you select No, click Finish to create an empty Consistency Group that can be used
later.

Figure 10-241 Add relationships to this group

6. Select one of the following types of relationships to create, as shown in Figure 10-242 on
page 779:
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.

– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated,
but the copy might not contain the last few updates if a disaster recovery operation is
performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume.
Changes can then be copied to the remote system asynchronously. The FlashCopy
relationship exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volumes is for internal use. The user cannot
manipulate this type of mapping like a normal FlashCopy mapping.
Most svctask *fcmap commands fail.
Click Next.

Figure 10-242 Select the type of relationship that you want to create

7. As shown in Figure 10-243 on page 780, you can optionally select existing relationships to
add to the group. Click Next.

Note: To select multiple relationships, hold down Ctrl and click the entries that you want
to include.

Figure 10-243 Select existing relationships to add to the group

8. In the window that is shown in Figure 10-244, you can create relationships. Select a
volume in the Master drop-down list. Then, select a volume in the Auxiliary drop-down list
for this master. Click Add, as shown in Figure 10-244. Repeat this step to create other
relationships, if needed.

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are displayed for a specific source volume.

To remove a relationship that was created, click the removal icon next to it (Figure 10-244).
After all of the relationships that you want to create are displayed, click Next.

Figure 10-244 Create relationships between the master and auxiliary volumes

9. Specify whether the volumes are already synchronized. Then, click Next, as shown in
Figure 10-245.

Figure 10-245 Volumes are already synchronized

10. In the last window, select whether you want to start to copy the data. Then, click Finish, as
shown in Figure 10-246.

Figure 10-246 Synchronize now

11. The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent copying. You can check the
copying progress in the Running Tasks status area, as shown in Figure 10-247 on
page 782.

Figure 10-247 Consistency Group created with relationship in copying and synchronized status

After the copies are completed, the relationships and the Consistency Group change to the
Consistent synchronized status.
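An equivalent result can be achieved from the CLI with commands similar to the following sketch; the Consistency Group, volume, and remote system names are hypothetical examples:

svctask mkrcconsistgrp -name ITSO_CG_1 -cluster ITSO_SVC_B
svctask mkrcrelationship -master MM_Vol_Master -aux MM_Vol_Aux -cluster ITSO_SVC_B -consistgrp ITSO_CG_1
svctask startrcconsistgrp ITSO_CG_1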

10.10.6 Renaming a Consistency Group


To rename a Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the panel, select the Consistency Group that you want to rename. Then, select
Actions → Rename, as shown in Figure 10-248.

Figure 10-248 Renaming a Consistency Group

3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 10-249 on page 783.

Figure 10-249 Changing the name for a Consistency Group

Consistency Group name: The Consistency Group name can consist of the letters
A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore (_) character. The
name can be 1 - 15 characters. However, the name cannot start with a number, dash,
or an underscore character. No blanks are allowed.

The new Consistency Group name is displayed on the Remote Copy panel.
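A Consistency Group can also be renamed from the CLI with a command similar to the following sketch; the group names are hypothetical examples:

svctask chrcconsistgrp -name ITSO_CG_NEW ITSO_CG_1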

10.10.7 Renaming a remote copy relationship


Complete the following steps to rename a remote copy relationship:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship mapping that you want to rename. Click
Actions → Rename, as shown in Figure 10-250.

Tip: You can also right-click a remote copy relationship and select Rename.

Figure 10-250 Rename remote copy relationship action

3. In the Rename Relationship window, enter the new name that you want to assign to the
FlashCopy mapping and click Rename, as shown in Figure 10-251 on page 784.

Figure 10-251 Renaming a remote copy relationship

Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
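A remote copy relationship can also be renamed from the CLI with a command similar to the following sketch; the relationship names are hypothetical examples:

svctask chrcrelationship -name MM_Rel_1_new MM_Rel_1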

10.10.8 Moving a stand-alone remote copy relationship to a Consistency Group

Complete the following steps to move a remote copy relationship to a Consistency Group:
1. From the SVC System panel, click Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. Select the relationship that you want to move to the Consistency Group.
4. Click Actions → Add to Consistency Group, as shown in Figure 10-252.

Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group.

Figure 10-252 Add to Consistency Group action

5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship by using the drop-down list, as shown in Figure 10-253 on
page 785. Click Add to Consistency Group to confirm your changes.

Figure 10-253 Adding a relationship to a Consistency Group
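On the CLI, a stand-alone relationship can be moved into a Consistency Group with a command similar to the following sketch; the relationship and group names are hypothetical examples:

svctask chrcrelationship -consistgrp ITSO_CG_1 MM_Rel_1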

10.10.9 Removing a remote copy relationship from a Consistency Group


Complete the following steps to remove a remote copy relationship from a Consistency
Group:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select a Consistency Group.
3. Select the remote copy relationship that you want to remove from the Consistency Group.
4. Click Actions → Remove from Consistency Group, as shown in Figure 10-254.

Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group.

Figure 10-254 Remove from Consistency Group action

5. In the Remove Relationship From Consistency Group window, click Remove, as shown in
Figure 10-255 on page 786.

Figure 10-255 Remove a relationship from a Consistency Group
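On the CLI, a relationship can be removed from its Consistency Group with a command similar to the following sketch; the relationship name is a hypothetical example and the -noconsistgrp parameter is assumed to be available at your code level:

svctask chrcrelationship -noconsistgrp MM_Rel_1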

10.10.10 Starting a remote copy relationship


When a remote copy relationship is created, the remote copy process can be started. Only
relationships that are not members of a Consistency Group, or the only relationship in a
Consistency Group, can be started individually.

Complete the following steps to start a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to start.
4. Click Actions → Start to start the remote copy process, as shown in Figure 10-256.

Tip: You can also right-click a relationship and select Start from the list.

Figure 10-256 Starting the remote copy process

5. If the relationship was not consistent, you can check the remote copy progress in the
Running Tasks status area, as shown in Figure 10-257 on page 787.

Figure 10-257 Checking the remote copy synchronization progress

6. After the task is complete, the remote copy relationship status has a Consistent
Synchronized state, as shown in Figure 10-258.

Figure 10-258 Consistent Synchronized remote copy relationship
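A stand-alone relationship can also be started from the CLI with a command similar to the following sketch; the relationship name is a hypothetical example:

svctask startrcrelationship MM_Rel_1

If the relationship is in the Idling state (for example, after it was stopped with secondary access enabled), the -primary parameter (master or aux) indicates which copy becomes the primary.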

10.10.11 Starting a remote copy Consistency Group


All of the mappings in a Consistency Group are brought to the same state. To start the remote
copy Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to start, as shown in Figure 10-259 on
page 788.

Figure 10-259 Remote Copy Consistency Groups view

3. Click Actions → Start (Figure 10-260) to start the remote copy Consistency Group.

Figure 10-260 Start action

4. You can check the remote copy Consistency Group progress, as shown in Figure 10-261.

Figure 10-261 Checking the remote copy Consistency Group progress

5. After the task completes, the Consistency Group and all of its relationships are in a
Consistent Synchronized state, as shown in Figure 10-262 on page 789.

Figure 10-262 Consistent Synchronized Consistency Group
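A Consistency Group can also be started from the CLI with a command similar to the following sketch; the group name is a hypothetical example:

svctask startrcconsistgrp ITSO_CG_1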

10.10.12 Switching the copy direction for a remote copy relationship


When a remote copy relationship is in the Consistent synchronized state, the copy direction
for the relationship can be changed. Only relationships that are not a member of a
Consistency Group (or the only relationship in a Consistency Group) can be switched
individually. These relationships can be switched from master to auxiliary or from auxiliary to
master, depending on the case.

Important: When the copy direction is switched, no outstanding I/O can exist to the
volume that changes from primary to secondary because all I/O is inhibited to that volume
when it becomes the secondary. Therefore, careful planning is required before you switch
the copy direction for a remote copy relationship.

Complete the following steps to switch a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to switch.
4. Click Actions → Switch (Figure 10-263 on page 790) to start the remote copy process.

Tip: You can also right-click a relationship and select Switch.

Figure 10-263 Switch copy direction action

5. The Warning window that is shown in Figure 10-264 opens. A confirmation is needed to
switch the remote copy relationship direction. The remote copy is switched from the
master volume to the auxiliary volume. Click Yes.

Figure 10-264 Warning window

Figure 10-265 on page 791 shows the command-line output about this task.

Figure 10-265 Command-line output for switch relationship action

The copy direction is now switched, as shown in Figure 10-266. The auxiliary volume is
now accessible and shown as the primary volume. Also, the auxiliary volume is now
synchronized to the master volume.

Figure 10-266 Checking remote copy synchronization direction
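The copy direction of a stand-alone relationship can also be switched from the CLI with a command similar to the following sketch; the relationship name is a hypothetical example:

svctask switchrcrelationship -primary aux MM_Rel_1

Specify -primary master to switch the direction back.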

10.10.13 Switching the copy direction for a Consistency Group


When a Consistency Group is in the Consistent synchronized state, the copy direction for this
Consistency Group can be changed.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a Consistency Group.

Complete the following steps to switch a Consistency Group:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to switch.
3. Click Actions → Switch (as shown in Figure 10-267) to start the remote copy process.

Tip: You can also right-click a relationship and select Switch.

Figure 10-267 Switch action

4. The warning window that is shown in Figure 10-268 opens. A confirmation is needed to
switch the Consistency Group direction. In the example that is shown in Figure 10-268, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes.

Figure 10-268 Warning window for ITSO SVC 3

The remote copy direction is now switched, as shown in Figure 10-269 on page 793. The
auxiliary volume is now accessible and shown as a primary volume.

Figure 10-269 Checking Consistency Group synchronization direction
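The copy direction of a Consistency Group can also be switched from the CLI with a command similar to the following sketch; the group name is a hypothetical example:

svctask switchrcconsistgrp -primary aux ITSO_CG_1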

10.10.14 Stopping a remote copy relationship


After it is started, the remote copy process can be stopped, if needed. Only relationships that
are not a member of a Consistency Group (or the only relationship in a Consistency Group)
can be stopped individually. You can also use this action to enable write access to a
consistent secondary volume.

Complete the following steps to stop a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to stop.
4. Click Actions → Stop (as shown in Figure 10-270) to stop the remote copy process.

Tip: You can also right-click a relationship and select Stop from the list.

Figure 10-270 Stop action

5. The Stop Remote Copy Relationship window opens, as shown in Figure 10-271 on
page 794. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Relationship.

Figure 10-271 Stop Remote Copy Relationship window

6. Figure 10-272 shows the command-line text for the stop remote copy relationship.

Figure 10-272 Stop remote copy relationship command-line output

The new relationship status can be checked, as shown in Figure 10-273 on page 795. The
relationship is now Consistent Stopped.

Figure 10-273 Checking remote copy synchronization status
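A stand-alone relationship can also be stopped from the CLI with a command similar to the following sketch; the relationship name is a hypothetical example:

svctask stoprcrelationship -access MM_Rel_1

Omit the -access parameter if you do not want to enable write access to the secondary volume.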

10.10.15 Stopping a Consistency Group


After it is started, the Consistency Group can be stopped, if necessary. You can also use this
task to enable write access to consistent secondary volumes.

Perform the following steps to stop a Consistency Group:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the Consistency Group that you want to stop.
3. Click Actions → Stop (as shown in Figure 10-274) to stop the remote copy Consistency
Group.

Tip: You can also right-click a relationship and select Stop from the list.

Figure 10-274 Selecting the Stop option

4. The Stop Remote Copy Consistency Group window opens, as shown in Figure 10-275 on
page 796. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Consistency Group.

Figure 10-275 Stop Remote Copy Consistency Group window

The new relationship status can be checked, as shown in Figure 10-276. The relationship
is now Consistent Stopped.

Figure 10-276 Checking remote copy synchronization status
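A Consistency Group can also be stopped from the CLI with a command similar to the following sketch; the group name is a hypothetical example:

svctask stoprcconsistgrp -access ITSO_CG_1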

10.10.16 Deleting stand-alone remote copy relationships


Complete the following steps to delete a stand-alone remote copy mapping:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship that you want to delete.

Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and click the entries that you want.

3. Click Actions → Delete Relationship, as shown in Figure 10-277 on page 797.

Tip: You can also right-click a remote copy mapping and select Delete Relationship.

Figure 10-277 Selecting the Delete Relationship option

4. The Delete Relationship window opens (Figure 10-278). In the “Verify the number of
relationships that you are deleting” field, enter the number of relationships that you want to
delete. This verification helps you avoid deleting the wrong relationships.
Click Delete, as shown in Figure 10-278.

Figure 10-278 Delete remote copy relationship
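A stand-alone relationship can also be deleted from the CLI with a command similar to the following sketch; the relationship name is a hypothetical example:

svctask rmrcrelationship MM_Rel_1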

10.10.17 Deleting a Consistency Group

Important: Deleting a Consistency Group does not delete its remote copy mappings.

Complete the following steps to delete a Consistency Group:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the left column, select the Consistency Group that you want to delete.
3. Click Actions → Delete, as shown in Figure 10-279 on page 798.

Figure 10-279 Selecting the Delete Consistency Group option

4. The warning window that is shown in Figure 10-280 opens. Click Yes.

Figure 10-280 Confirmation message
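A Consistency Group can also be deleted from the CLI with a command similar to the following sketch; the group name is a hypothetical example. As with the GUI, the remote copy relationships in the group are not deleted:

svctask rmrcconsistgrp ITSO_CG_1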

10.11 Managing the SAN Volume Controller clustered system by using the GUI

This section describes the various configuration and administrative tasks that you can
perform on the SVC clustered system.

10.11.1 System status information


From the System panel, complete the following steps to display the system and node
information:
1. On the SVC System panel, move the mouse pointer over the Monitoring selection and
click System.
The System Status panel opens, as shown in Figure 10-281 on page 799.

Figure 10-281 System status panel

On the System status panel (beneath the SVC nodes), you can view the global storage
usage, as shown in Figure 10-282. By using this method, you can monitor the physical
capacity and the allocated capacity of your SVC system. You can change between the
Allocation view and the Compression view to see the capacity usage and space savings of
the Real-time Compression feature, as shown in Figure 10-283.

Figure 10-282 Physical capacity information: Allocation view

Figure 10-283 Physical capacity information: Compression view

10.11.2 View I/O Groups and their associated nodes
The System status panel shows an overview of the SVC system with its I/O Groups and their
associated nodes. As shown in Figure 10-284, the node status can be checked by using a
color code that represents its status.

Figure 10-284 System view with node status

You can click an individual node to select it, or right-click the node, as shown in Figure 10-285,
to open the list of actions.

Figure 10-285 System view with node properties

If you click Properties, you see the following view, as shown in Figure 10-286 on page 801.

Figure 10-286 Properties for a node

Under View in the list of actions, you can see information about the Fibre Channel Ports, as
shown in Figure 10-287.

Figure 10-287 View Fibre Channel ports

Figure 10-288 shows the Fibre Channel ports of an active node.

Figure 10-288 Fibre Channel ports of an active node

Left-click one node to see more details about a single node, as shown in Figure 10-289 on
page 802.

Figure 10-289 View of one node

10.11.3 View SVC clustered system properties


Complete the following steps to view the SVC clustered system properties:
1. To see more information about the system, select one node and right-click. The options
are shown in Figure 10-290.

Figure 10-290 Options for more information about that system

2. The following actions are available:


򐂰 Rename
򐂰 Modify Site
򐂰 Power Off
򐂰 Remove
򐂰 View
򐂰 Show Dependent Volumes
򐂰 Properties

10.11.4 Renaming the SAN Volume Controller clustered system
All objects in the SVC system have names that are user defined or system generated.
Choose a meaningful name when you create an object. If you do not choose a name for the
object, the system generates a name for you. A well-chosen name serves not only as a label
for an object, but also as a tool for tracking and managing the object. Choosing a meaningful
name is important if you decide to use configuration backup and restore.

Naming rules
When you choose a name for an object, the following rules apply:
򐂰 Names must begin with a letter.

Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.

򐂰 The first character cannot be numeric.


򐂰 The name can be a maximum of 63 characters with the following exceptions: The name
can be a maximum of 15 characters for Remote Copy relationships and groups. The
lsfabric command displays long object names that are truncated to 15 characters for
nodes and systems. Version 5.1.0 systems display truncated volume names when they
are partnered with a version 6.1.0 or later system that has volumes with long object names
(lsrcrelationshipcandidate or lsrcrelationship commands).
򐂰 Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), the
underscore (_) character, a period (.), a hyphen (-), and a space.
򐂰 Names must not begin or end with a space.
򐂰 Object names must be unique within the object type. For example, you can have a volume
named ABC and an MDisk called ABC, but you cannot have two volumes called ABC.
򐂰 The default object name is valid (object prefix with an integer).
򐂰 Objects can be renamed to their current names.

To rename the system from the System panel, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System panel, as shown in
Figure 10-291.

Figure 10-291 Actions on the System panel

2. From the panel, select Rename System, as shown in Figure 10-292 on page 804.

Figure 10-292 Select Rename System

3. The panel opens, as shown in Figure 10-293. Specify a new name for the system and click
Rename.

Figure 10-293 Rename the system

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.

4. The Warning window opens, as shown in Figure 10-294 on page 805. If you are using the
iSCSI protocol, changing the system name or the iSCSI Qualified Name (IQN) also
changes the IQN of all of the nodes in the system. Changing the system name or the IQN
might require the reconfiguration of all iSCSI-attached hosts. This reconfiguration might be
required because the IQN for each node is generated by using the system and node
names.

Figure 10-294 System rename warning

5. Click Yes.
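The system can also be renamed from the CLI with a command similar to the following sketch; the system name is a hypothetical example. The same iSCSI-related considerations apply when the name is changed from the CLI:

svctask chsystem -name ITSO_SVC_DH8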

10.11.5 Renaming the site information of the nodes


The SVC supports configuring site settings that describe the location of the nodes and
storage systems that are deployed in a stretched system configuration. This site information
configuration is only part of the configuration process for stretched systems. The site
information makes it possible for the SVC to manage and reduce the amount of data that is
transferred between the two sides of the system, which reduces the costs of maintaining the
system.

Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. Site1 and site2 are the two sites that make up the two halves of the stretched
system, and site3 is the quorum disk. You can rename the sites to describe your data center
locations.

To rename the sites, follow these steps:


1. On the System panel, select Actions in the upper-left corner.
2. The Actions menu opens. Click Rename Sites, as shown in Figure 10-295.

Figure 10-295 Rename sites action

3. The Rename Sites panel with the site information opens, as shown in Figure 10-296.

Figure 10-296 Rename Sites default panel

4. Enter the appropriate site information. Figure 10-297 shows the updated Rename Sites
panel. Click Rename.

Figure 10-297 Rename the sites

10.11.6 Rename a node


To rename a node, follow these steps:
1. Right-click one of the nodes. The Properties panel for this node opens, as shown in
Figure 10-298 on page 807.
2. Click Rename.

Figure 10-298 Information panel for a node

3. The panel to rename the node opens, as shown in Figure 10-299. Enter the new name of
the node.

Figure 10-299 Enter the new name of the node

4. Click Rename. If required, repeat these steps for all remaining nodes.
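A node can also be renamed from the CLI with a command similar to the following sketch; the new node name and the current node name are hypothetical examples:

svctask chnode -name ITSO_SVC_NODE1 node1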

10.11.7 Shutting down the SAN Volume Controller clustered system


If all input power to the SVC clustered system is removed for more than a few minutes (for
example, if the machine room power is shut down for maintenance), it is important that you
shut down the system before you remove the power. Shutting down the system while it is still
connected to the main power ensures that the uninterruptible power supply unit’s batteries or
the batteries of the DH8 nodes are still fully charged when the power is restored.

Important: Starting with the 2145-DH8 nodes, we no longer have uninterruptible power
supplies. The batteries are now included in the system.

If you remove the main power while the system is still running, the uninterruptible power
supply units or internal batteries detect the loss of power and instruct the nodes to shut down.
This shutdown can take several minutes to complete. Although the uninterruptible power
supply units or internal batteries have sufficient power to perform the shutdown, you
unnecessarily drain a unit’s batteries.

When power is restored, the SVC nodes start. However, one of the first checks that is
performed by the SVC node is to ensure that the batteries have sufficient power to survive
another power failure, which enables the node to perform a clean shutdown.

(You do not want the batteries to run out of power before the node’s shutdown activities
complete.) If the batteries are not charged sufficiently, the node does not start.

It can take up to 3 hours to charge the batteries sufficiently for a node to start.

Important: When a node shuts down because of a power loss, the node dumps the cache
to an internal hard disk drive so that the cached data can be retrieved when the system
starts. With 2145-8F2/8G4 nodes, the cache is 8 GiB. With 2145-CF8/CG8 nodes, the
cache is 24 GiB. With 2145-DH8 nodes, the cache is up to 64 GiB. Therefore, this process
can take several minutes to dump to the internal drive.

The SVC uninterruptible power supply units or internal batteries are designed to survive at
least two power failures in a short time. After that time, the nodes do not start until the
batteries have sufficient power to survive another immediate power failure.

During maintenance activities, if the uninterruptible power supply units or batteries detect
power and then detect a loss of power multiple times (the nodes start and shut down more
than once in a short time), you might discover that you unknowingly drained the batteries. You
must wait until they are charged sufficiently before the nodes start.

Important: Before a system is shut down, quiesce all I/O operations that are directed to
this system because you lose access to all of the volumes that are serviced by this
clustered system. Failure to quiesce all I/O operations might result in failed I/O operations
that are reported to your host operating systems.

You do not need to quiesce all I/O operations if you are shutting down only one SVC node.

Begin the process of quiescing all I/O activity to the system by stopping the applications on
your hosts that are using the volumes that are provided by the system.

If you are unsure which hosts are using the volumes that are provided by the SVC system,
follow the procedure that is described in 9.6.21, “Showing the host to which the volume is
mapped” on page 540, and repeat this procedure for all volumes.

From the System status panel, complete the following steps to shut down your system:
1. Click Actions, as shown in Figure 10-300 on page 809. Select Power Off.

Figure 10-300 Action panel to power off the system

2. A confirmation window opens, as shown in Figure 10-301.

Figure 10-301 Confirmation window to confirm the shutdown of the system

3. Enter the code in the window to confirm the system shutdown (Figure 10-302 on
page 810). Ensure that you stopped all FlashCopy mappings, remote copy relationships,
data migration operations, and forced deletions before you continue.

Important: You lose all administrative contact with your system.

Figure 10-302 Shutting down the system confirmation window

4. Click Power Off to begin the shutdown process.

You completed the required tasks to shut down the system. You can now shut down the
uninterruptible power supply units by pressing the power buttons on their front panels. The
internal batteries of the 2145-DH8 nodes will shut down automatically with the nodes.

10.11.8 Power off a single node


You can power off a single node from the System menu:
1. Right-click the node that you want to power down.
2. Select Power Off from the menu that is shown in Figure 10-303.

Figure 10-303 Power off a single node

3. A confirmation window opens to ensure that you want to power off the control node, as
shown in Figure 10-304 on page 811.

Figure 10-304 Confirmation window to power down a node

4. Click Yes.

Tip: When you shut down the system, it does not automatically start. You must manually
start the SVC nodes. If the system shuts down because the uninterruptible power supply
units or batteries detected a loss of power, it automatically restarts when the uninterruptible
power supply units or batteries detect that the power was restored (and the batteries have
sufficient power to survive another immediate power failure).

Restarting the SVC system: To start the SVC system, you must first start the
uninterruptible power supply units by pressing the power buttons on their front panels. No
action is necessary for 2145-DH8 nodes. After they are on, go to the service panel of one
of the nodes within your SVC clustered system and press the power-on button and release
it quickly. After the node is fully booted (for example, displaying Cluster: on line 1 and the
system name on line 2 of the SVC front panel), you can start the other nodes in the same
way. When all of the nodes are fully booted and you reestablish administrative contact by
using the GUI, your system is fully operational again.

10.12 Upgrading software


The following sections describe software updates.

10.12.1 Updating system software


From the System status panel, complete the following steps to upgrade the software of your
SVC clustered system:
1. Click Actions.
2. Click Update → System, as shown in Figure 10-305 on page 812.

Figure 10-305 Action tab to update system software

3. Follow the process that is described in 10.18.2, “SAN Volume Controller upgrade test
utility” on page 859.
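The update can also be applied from the CLI with the applysoftware command, which is listed for the Service role in Table 10-1 on page 835. The following command is a minimal sketch; the package file name is a hypothetical example and the file must first be copied to the system:

svctask applysoftware -file IBM2145_INSTALL_7.4.0.0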

10.12.2 Update drive software


You can update drives by downloading and applying firmware updates. By using the
management GUI, you can update drives that are internal to the system. Drives that are on
external storage systems that are attached to the system cannot be updated by using this
panel. The management GUI can update individual drives or update all drives that have
available updates. A drive is not updated if any of the following conditions is true:
򐂰 The drive is offline.
򐂰 The drive is failed.
򐂰 The RAID array to which the drive belongs is not redundant.
򐂰 The drive firmware is the same as the update firmware.
򐂰 The drive has dependent volumes that will go offline during the update.
򐂰 The drive is used as a boot drive for the system.

To update drives, complete the following steps:


1. Go to http://www.ibm.com/storage/support/2145 to locate and download the firmware
update file to the system.
2. From the Actions menu, select Update → Drives to update all drives, as shown in
Figure 10-306 on page 813.

Figure 10-306 Update drive software

3. The Upgrade All Drives panel opens as shown in Figure 10-307.


4. Select the folder icon. An Explorer window opens, where you can select the upgrade
package, which is stored on your local disk.
5. Click Upgrade, as shown in Figure 10-307.

Figure 10-307 Enter the location of the upgrade package

To update the internal drives, select Pools → Internal Storage in the management GUI.
To update specific drives, select the drive or drives and select Actions → Update. Click
Browse and select the directory where you downloaded the firmware update file. Click
Upgrade. Depending on the number of drives and the size of the system, drive updates
can take up to 10 hours to complete.
6. To monitor the progress of the update, click the Running Tasks icon on the bottom center
of the management GUI window and then click Drive Update Operations. You can also
use the Monitoring → Events panel to view any completion or error messages that relate
to the update.

10.13 Managing I/O Groups
In SVC terminology, the I/O Group is represented by a pair of SVC nodes combined in a
clustered system. Nodes in each I/O Group must consist of similar hardware; however,
different I/O Groups of a system can be built from different devices, as illustrated in
Figure 10-308.

Figure 10-308 Combination of hardware in different I/O Groups

In our lab environment, io_grp0 is built from the 2145-DH8 nodes and io_grp1 consists of a
previous model 2145-CF8. This configuration is typical when you are upgrading your data
center storage virtualization infrastructure to a newer SVC platform.

To see the I/O Group details, move the mouse pointer over Actions and click Properties. The
Properties are shown in Figure 10-309. Alternatively, hover the mouse pointer over the I/O
Group name and right-click to open a menu and navigate to Properties.

Figure 10-309 I/O Group information

The following information is shown in the panel:


򐂰 Name
򐂰 Version
򐂰 ID

򐂰 I/O Groups
򐂰 Topology
򐂰 Control enclosures
򐂰 Expansion enclosures
򐂰 Internal capacity

10.14 Managing nodes


In this section, we describe how to manage the SVC nodes.

10.14.1 Viewing node properties


From the Monitoring → System panel, complete the following steps to review SVC node
properties:
1. Move the mouse over one of the nodes. Right-click this node and select Properties.
2. The Properties panel opens, as shown in Figure 10-310.

Figure 10-310 Properties of one node

3. The following tasks are available for this node (Figure 10-311).

Figure 10-311 Node tasks

The following tasks are shown:
– Rename
– Modify Site
– Identify
– Power Off
– Remove
– View → Fibre Channel Ports
– Show Dependent Volumes
– Properties
4. To view node hardware properties, move the mouse over the hardware parts of the node
(Figure 10-313). You must “turn” or rotate the machine in the GUI by clicking the Rotate
arrow with the mouse, as shown in Figure 10-312.

Figure 10-312 Rotate arrow

5. The System window (Figure 10-313) shows how to obtain additional information about
certain hardware parts.

Figure 10-313 Node hardware information

6. Right-click the FC adapter to open the Properties view (Figure 10-314).

Figure 10-314 Properties view of the FC adapter

7. Figure 10-315 shows the properties for an FC adapter.

Figure 10-315 Properties of an FC adapter

10.14.2 Renaming a node


For information, see 10.11.6, “Rename a node” on page 806.

10.14.3 Adding a node to the SAN Volume Controller clustered system


Before you add a node to a system, ensure that you configure the switch zoning so that the
node that you add is in the same zone as all other nodes in the system. If you are replacing a
node and the switch is zoned by worldwide port name (WWPN) rather than by switch port,
you must follow the service instructions carefully to continue to use the same WWPNs.
Complete the following steps to add a node to the SVC clustered system:
1. If the switch setting is correct, you see the additional I/O Group as a gray empty frame on
the System panel. Figure 10-316 on page 818 shows this empty frame.

Figure 10-316 Available I/O Group on the System panel

2. Right-click the empty gray frame and the Action panel for the system opens, as shown in
Figure 10-317. Click Add Nodes.

Figure 10-317 Panel to add nodes

3. Also, you can use the left mouse button to click directly in the gray frame of the I/O Group to
display the Click to Add option (Figure 10-318 on page 819).

Figure 10-318 Left-click the gray I/O Group to display the Click to Add option

4. In the Add Nodes window (Figure 10-319), you see the two available nodes, which are in
candidate mode and able to join the cluster.

Figure 10-319 Two available nodes

Important: You must have at least two nodes in an I/O Group.

5. Click Add. The system adds one node after the other. You can check the progress of this
action by hovering the mouse pointer over the new I/O Group. See Figure 10-320.

Figure 10-320 The process to add one node

Important: When a node is added to a system, it displays a state of “Adding” and a yellow
warning triangle with an exclamation point. The process to add a node to the system can
take up to 30 minutes, particularly if the software version of the node changes. The added
nodes are updated to the code version of the running cluster.
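Nodes can also be added from the CLI. The following commands are a minimal sketch; the panel name 178941 and the I/O Group io_grp1 are hypothetical examples:

svcinfo lsnodecandidate
svctask addnode -panelname 178941 -iogrp io_grp1

The lsnodecandidate command lists the candidate nodes and their panel names; use its output to supply the -panelname value.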

10.14.4 Removing a node from the SAN Volume Controller clustered system
From the System panel, complete the following steps to remove a node:
1. Select a node and right-click it, as shown in Figure 10-321. Select Remove.

Figure 10-321 Remove a node from the SVC clustered system action

2. The Warning window that is shown in Figure 10-322 opens.

Figure 10-322 Warning window when you remove a node

By default, the cache is flushed before the node is deleted to prevent data loss if a failure
occurs on the other node in the I/O Group.
In certain circumstances, such as when the system is degraded, you can take the
specified node offline immediately without flushing the cache or ensuring that data loss
does not occur. Select Bypass check for volumes that will go offline, and remove the
node immediately without flushing its cache.
3. Click Yes to confirm the removal of the node. See the System Details panel to verify a
node removal, as shown in Figure 10-323.

Figure 10-323 System Details panel with one SVC node removed

4. Figure 10-324 shows the output of the command line.

Figure 10-324 Command-line output for removing one node

If this node is the last node in the system, the warning message differs, as shown in
Figure 10-325 on page 822. Before you delete the last node in the system, ensure that you
want to destroy the system. The user interface and any open CLI sessions are lost.

Figure 10-325 Warning window for the removal of the last node in the cluster

After you click OK, the node is a candidate to be added back into this system or into
another system.
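A node can also be removed from the CLI with a command similar to the following sketch; the node name is a hypothetical example:

svctask rmnode node2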

10.15 Troubleshooting
The events that are detected by the system are saved in a system event log. When an entry is
made in this event log, the condition is analyzed and classified to help you diagnose
problems.

10.15.1 Events panel


In the Monitoring actions selection menu, the Events panel (Figure 10-326) displays the event
conditions that require action and the procedures to diagnose and fix them.

To access this panel from the SVC System panel, move a mouse pointer over Monitoring in
the dynamic menu and select Events.

Figure 10-326 Monitoring: Events selection

The list of system events opens with the highest-priority event indicated and information
about how long ago the event occurred. Click Close to return to the Recommended Actions
panel.

Note: If an event is reported, you must select the event and run a fix procedure.

Running the fix procedure


To run a procedure to fix an event, complete the following steps:
1. In the table, select an event.
2. Click Actions → Run Fix Procedure, as shown in Figure 10-327.

Tip: You can also click Run Fix at the top of the panel (Figure 10-327 on page 823) to
solve the most critical event.

Figure 10-327 Run Fix Procedure action

Tip: You can also access the Run Fix Procedure action by right-clicking an event.

3. The Directed Maintenance Procedure window opens, as shown in Figure 10-328. Follow
the steps in the wizard to fix the event.

Sequence of steps: We do not describe all of the possible steps here because the
steps that are involved depend on the specific event. The process is always interactive
and you are guided through the entire process.

Figure 10-328 Directed Maintenance Procedure wizard

4. Click Cancel to return to the Recommended Actions panel.

10.15.2 Event log
In the Events panel (Figure 10-329), you can choose to display the SVC event log by
Recommended Actions, Unfixed Messages and Alerts, or Show All events.

To access this panel from the SVC System panel that is shown in Figure 10-1 on page 656,
move the mouse pointer over the Monitoring selection in the dynamic menu and click Events.
Then, in the upper-left corner of the panel, select Recommended actions, Unfixed messages
and alerts, or Show all.

Figure 10-329 SVC event log

Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem.
Other alerts also require an action, but they do not have a fix procedure. Messages are fixed
when you acknowledge reading them.

Filtering events
You can filter events in various ways. Filtering can be based on event status, as described in
“Basic filtering”, or over a period, as described in “Time filtering” on page 825. You can also
search the event log for a specific text string by using table filtering, as described in “Overview
window” on page 661.

Certain events require a specific number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are beneath the coalesce threshold and are transient.

You can also sort events by time or error code. When you sort by error code, the most serious
events (those events with the lowest numbers) are displayed first.

Basic filtering
You can filter the Event log display in one of the following ways by using the drop-down menu
in the upper-left corner of the panel (Figure 10-330 on page 825):
򐂰 Display all unfixed alerts and messages: Select Recommended Actions to show all events
that require your attention.
򐂰 Display all alerts and messages: Select Unfixed Messages and Alerts.
򐂰 Display all event alerts, messages, monitoring, and expired events: Select Show All, which
includes the events that are under the threshold.

Figure 10-330 Filter Event Log display

Time filtering
You can use the following methods to perform time filtering:
򐂰 Select a start date and time, and an end date and time frame filter. Complete the following
steps to use this method:
a. Click Actions → Filter by Date, as shown in Figure 10-331.

Figure 10-331 Filter by Date action

Tip: You can also access the Filter by Date action by right-clicking an event.

b. The Date/Time Filter window opens, as shown in Figure 10-332. From this window,
select a start date and time and an end date and time.

Figure 10-332 Date/Time Filter window

c. Click Filter and Close. Your panel is now filtered based on the time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-333 on page 826.

Figure 10-333 Reset Date Filter action

򐂰 Select an event and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an event.
b. Click Actions → Show entries within. Select minutes, hours, or days, and select a
value, as shown in Figure 10-334.

Figure 10-334 Show entries within a certain amount of time after this event

Tip: You can also access the Show entries within action by right-clicking an event.

c. Now, your window is filtered based on the time frame, as shown in Figure 10-335.

Figure 10-335 Time frame filtering

To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-336 on page 827.

Figure 10-336 Reset Date Filter action

Marking an event as fixed


To mark one or more events as fixed, complete the following steps:
1. In the table, select one or more entries.

Tip: To select multiple events, hold down Ctrl and click the entries that you want to
select.

2. Click Actions → Mark as Fixed, as shown in Figure 10-337.

Figure 10-337 Mark as Fixed action

Tip: You can also access the Mark as Fixed action by right-clicking an event.

3. The Warning window that is shown in Figure 10-338 opens. Click Yes.

Figure 10-338 Warning window
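Events can also be reviewed and marked as fixed from the CLI. The following commands are a minimal sketch; the -fixed filter of lseventlog and the sequence number 120 are assumptions for illustration:

svcinfo lseventlog -fixed no
svctask cheventlog -fix 120

The sequence number of an event is shown in the lseventlog output.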

Exporting event log entries
You can export event log entries to a comma-separated values (CSV) file for further
processing and enhanced filtering with external applications. You can export a full event log or
a filtered result that is based on your requirements. To export an event log entry, complete the
following steps:
1. From the Events panel, show and sort or filter the table to provide the results that you want
to export into a CSV file.
2. Click the diskette icon () and save the file to your workstation, as shown in
Figure 10-339.

Figure 10-339 Export event log to a CSV file

3. You can view the file by using Notepad or another program, as shown in Figure 10-340.

Figure 10-340 Viewing the CSV file in Notepad

Clearing the log


To clear the logs, complete the following steps:
1. Click Actions → Clear Log, as shown in Figure 10-341 on page 829.

Figure 10-341 Clear log

2. The Warning window that is shown in Figure 10-342 opens. From this window, you must
confirm that you want to clear all entries from the error log.

Figure 10-342 Warning window

3. Click Yes.

10.15.3 Support panel


From the support panel that is shown in Figure 10-343 on page 830, you can download a
support package that contains log files and information that can be sent to support personnel
to help troubleshoot the system. You can download individual log files or download statesaves,
which are dumps or livedumps of system data.

Figure 10-343 Support panel

Downloading the support package


To download the support package, complete the following steps:
1. Click Download Support Package, as shown in Figure 10-344.

Figure 10-344 Download Support Package

The Download Support Package window opens, as shown in Figure 10-345 on page 831.

Figure 10-345 Download Support Package window

The duration varies: Depending on your choice, this action can take several minutes
to complete.

From this window, select the following types of logs that you want to download:
– Standard logs
These logs contain the most recent logs that were collected for the system. These logs
are the most commonly used by support to diagnose and solve problems.
– Standard logs plus one existing statesave
These logs contain the standard logs for the system and the most recent statesaves
from any of the nodes in the system. Statesaves are also known as dumps or
livedumps.
– Standard logs plus most recent statesave from each node
These logs contain the standard logs for the system and the most recent statesaves
from each node in the system. Statesaves are also known as dumps or livedumps.
– Standard logs plus new statesaves
These logs generate new statesaves (livedumps) for all the nodes in the system and
package the statesaves with the most recent logs.
2. Click Download, as shown in Figure 10-345.
3. Select where you want to save the logs, as shown in Figure 10-346 on page 832.

Figure 10-346 Save the log file on your personal workstation

Download individual packages


To download packages manually, complete the following tasks:
1. Activate the individual log files view by clicking Show full log listing, as shown in
Figure 10-347.

Figure 10-347 Show full log listing link

2. In the detailed view, select the node from which you want to download the logs by using
the drop-down menu that is in the upper-left corner of the panel, as shown in
Figure 10-348.

Figure 10-348 Node selection

3. Select the package or packages that you want to download, as shown in Figure 10-349 on
page 833.

Figure 10-349 Selecting individual packages

Tip: To select multiple packages, hold down Ctrl and click the entries that you want to
include.

4. Click Actions → Download, as shown in Figure 10-350.

Figure 10-350 Download packages

5. Select where you want to save these logs on your workstation.

Tip: You can also delete packages by clicking Delete in the Actions menu.

CIMOM logging level


Select this option to include the Common Information Model (CIM) object manager (CIMOM)
tracing components and logging details.

Maximum logging level: The maximum logging level can have a significant effect on the
performance of the CIMOM interface.

To change the CIMOM logging level to high, medium, or low, use the drop-down menu in the
upper-right corner of the panel, as shown in Figure 10-351.

Figure 10-351 Change the CIMOM logging level

10.16 User management


Users are managed from within the Access selection section of the dynamic menu in the SVC
GUI, as shown in Figure 10-352.

Figure 10-352 Users panel

Each user account has a name, role, and password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI. Starting with version
6.3, you can access the CLI with a password and no SSH key.

Note: Use the default Superuser account only for initial configuration and emergency
access. Change its default passw0rd. Always define individual accounts for the users.

The role-based security feature organizes the SVC administrative functions into groups,
which are known as roles, so that permissions to run the various functions can be granted
differently to the separate administrative users. Table 10-1 on page 835 lists the four major
roles and one special role.

Table 10-1 Authority roles

Role: Security Admin
Allowed commands: All commands.
User: Superusers.

Role: Administrator
Allowed commands: All commands except the following svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.
User: Administrators that control the SVC.

Role: Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.
User: Users that control all copy functionality of the cluster.

Role: Service
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.
User: Users that perform service maintenance and other hardware tasks on the cluster.

Role: Monitor
Allowed commands: All svcinfo commands, the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig backup command.
User: Users that need view access only.

The superuser user is a built-in account that has the Security Admin user role permissions.
You cannot change permissions or delete this superuser account; you can only change the
password. You can also change this password manually on the front panels of the clustered
system nodes.

An audit log tracks actions that are issued through the management GUI or CLI. For more
information, see 10.16.9, “Audit log information” on page 845.

10.16.1 Creating a user
Complete the following steps to create a user:
1. From the SVC System panel, move the mouse pointer over the Access selection in the
dynamic menu and click Users.
2. On the Users panel, click Create User, as shown in Figure 10-353.

Figure 10-353 Create User

3. The Create User window opens, as shown in Figure 10-354.

Figure 10-354 Create User window

4. Enter a new user name in the Name field.

User name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The user name can be 1 - 256 characters.

The following types of authentication are available in the Authentication Mode section:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 835) to which you want this user to belong.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported Lightweight Directory
Access Protocol (LDAP) service. Ensure that the remote authentication service is
supported by the SVC clustered system. For more information about remote user
authentication, see 2.9, “User authentication” on page 49.
The following types of local credentials can be configured in the Local Credentials section,
depending on your needs:
– Password authentication
The password authenticates users to the management GUI. Enter the password in the
Password field. Verify the password.

Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.

– SSH public/private key authentication


The SSH public key authenticates users to the CLI. Use Browse to locate and upload
the SSH public key. If you did not create an SSH key pair, you can still access the SVC
system by using your user name and password.
5. To create the user, click Create, as shown in Figure 10-354 on page 836.
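A user can also be created from the CLI with the svctask mkuser command. The following commands are a minimal sketch only; the user names, user group, password, and key file path are example values:
svctask mkuser -name jdoe -usergrp Administrator -password Passw0rd
svctask mkuser -name jdoe2 -usergrp Monitor -keyfile /tmp/jdoe2.pub
The -keyfile parameter associates an uploaded SSH public key with the user; this example assumes that the public key file was first copied to the system.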

10.16.2 Modifying the user properties


Complete the following steps to change the user properties:
1. From the SVC System panel, move the pointer over the Access selection in the dynamic
menu and click Users.
2. In the left column, select a User Group.
3. Select a user.
4. Click Actions → Properties, as shown in Figure 10-355 on page 838.

Tip: You can also change user properties by right-clicking a user and selecting
Properties from the list.

Figure 10-355 User Properties action

The User Properties window opens, as shown in Figure 10-356.

Figure 10-356 User Properties window

5. From the User Properties window, you can change the authentication mode and the local
credentials. For the authentication mode, choose the following type of authentication:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 835) of which you want the user to be part.

– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that
the remote authentication service is supported by the SVC clustered system.
For the local credentials, the following types of local credentials can be configured in this
section, depending on your needs:
– Password authentication: The password authenticates users to the management GUI.
You must enter the password in the Password field. Verify the password.

Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.

– SSH public/private key authentication: The SSH key authenticates users to the CLI.
Use Browse to locate and upload the SSH public key.
6. To confirm the changes, click OK (Figure 10-356 on page 838).
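The same changes can be made from the CLI with the svctask chuser command. The following sketch uses example values; verify the parameters against your code level:
svctask chuser -usergrp Monitor jdoe
svctask chuser -password NewPassw0rd jdoe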

10.16.3 Removing a user password

Important: To remove the password for a specific user, the SSH public key must be
defined. Otherwise, this action is not available.

Complete the following steps to remove a user password:


1. From the SVC System panel, move the pointer over the Access selection in the dynamic
menu and click Users.
2. Select the user.
3. Click Actions → Remove Password, as shown in Figure 10-357.

Tip: You can also remove the password by right-clicking a user and selecting Remove
Password.

Figure 10-357 Remove Password action

4. The Warning window that is shown in Figure 10-358 on page 840 opens. Click Yes.

Figure 10-358 Warning window

10.16.4 Removing a user SSH public key

Important: To remove the SSH public key for a specific user, the password must be
defined. Otherwise, this action is not available.

Complete the following steps to remove an SSH public key:


1. From the SVC System panel, move your mouse pointer over the Access selection in the
dynamic menu, and then click Users.
2. Select the user.
3. Click Actions → Remove SSH Key, as shown in Figure 10-359.

Tip: You can also remove the SSH public key by right-clicking a user and selecting
Remove SSH Key.

Figure 10-359 Remove SSH Key action

4. The Warning window that is shown in Figure 10-360 opens. Click Yes.

Figure 10-360 Warning window

10.16.5 Deleting a user
Complete the following steps to delete a user:
1. From the SVC System panel, move the mouse pointer over the Access selection, and then
click Users.
2. Select the user.

Important: To select multiple users to delete, hold down Ctrl and click the entries that
you want to delete.

3. Click Actions → Delete, as shown in Figure 10-361.

Tip: You can also delete a user by right-clicking the user and selecting Delete.

Figure 10-361 Delete a user action
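From the CLI, a user can be deleted with the svctask rmuser command, as shown in this example (the user name is hypothetical):
svctask rmuser jdoe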

10.16.6 Creating a user group


Five user groups are created, by default, on the SVC. If necessary, you can create other user
groups.

Complete the following steps to create a user group:


1. From the SVC System panel, move the pointer over the Access selection on the dynamic
menu, and then click Users. Click Create User Group, as shown in Figure 10-362.

Figure 10-362 Create User Group

The Create User Group window opens, as shown in Figure 10-363.

Figure 10-363 Create User Group window

2. Enter a name for the group in the Group Name field.

Group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The group name can be 1 - 63 characters.

A role must be selected among Monitor, Copy Operator, Service, Administrator, or


Security Administrator. For more information about these roles, see Table 10-1 on
page 835.

3. To create the user group name, click Create.


4. You can verify the user group creation in the User Groups panel, as shown in
Figure 10-364 on page 843.

Figure 10-364 Verify user group creation
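A user group can also be created from the CLI with the svctask mkusergrp command. The following sketch uses an example group name; the role must be one of Monitor, CopyOperator, Service, Administrator, or SecurityAdmin:
svctask mkusergrp -name ITSO_Support -role Service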

10.16.7 Modifying the user group properties

Important: For preset user groups (SecurityAdmin, Administrator, CopyOperator, Service,


and Monitor), you cannot change the related roles.

Complete the following steps to change user group properties:


1. From the SVC System panel, move the mouse pointer over the Access selection on the
dynamic menu and click Users.
2. In the left column, select the User Group.
3. Click Actions → Properties, as shown in Figure 10-365.

Figure 10-365 Modify user group properties

4. The User Group Properties window opens (Figure 10-366 on page 844).

Figure 10-366 User Group Properties window

From this window, you can change the role. You must select a role among Monitor, Copy
Operator, Service, Administrator, or Security Administrator. For more information about
these roles, see Table 10-1 on page 835.
5. To confirm the changes, click OK, as shown in Figure 10-366.

10.16.8 Deleting a user group


Complete the following steps to delete a user group:
1. From the SVC System panel, move the mouse pointer over the Access selection on the
dynamic menu, and then click Users.
2. In the left column, select the User Group.
3. Click Actions → Delete, as shown in Figure 10-367 on page 845.

Important: You cannot delete the following preset user groups:


򐂰 SecurityAdmin
򐂰 Administrator
򐂰 CopyOperator
򐂰 Service
򐂰 Monitor

Figure 10-367 Delete User Group action

4. The following options are available:


– If you do not have any users in this group, the Delete User Group window opens, as
shown in Figure 10-368. Click Delete to complete the operation.

Figure 10-368 Delete User Group window

– If you have users in this group, the Delete User Group window opens, as shown in
Figure 10-369. The users of this group are moved to the Monitor user group.

Figure 10-369 Delete User Group window

10.16.9 Audit log information


An audit log tracks actions that are issued through the management GUI or the CLI. You can
use the audit log to monitor the user activity on your SVC clustered system.

To view the audit log, from the SVC System panel, move the pointer over the Access selection
on the dynamic menu and click Audit Log, as shown in Figure 10-370.

Figure 10-370 Audit Log entries

The audit log entries provide the following types of information:


򐂰 Time and date when the action or command was issued on the system
򐂰 Name of the user who performed the action or command
򐂰 IP address of the system where the action or command was issued
򐂰 Parameters that were issued with the command
򐂰 Results of the command or action
򐂰 Sequence number
򐂰 Object identifier that is associated with the command or action
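The audit log can also be viewed from the CLI with the svcinfo catauditlog command. The following sketch lists the five most recent entries; the -first value is an example only:
svcinfo catauditlog -first 5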

Time filtering
The following methods are available to perform time filtering on the audit log:
򐂰 Select a start date and time and an end date and time.
To use this time frame filter, complete the following steps:
a. Click Actions → Filter by Date, as shown in Figure 10-371.

Figure 10-371 Audit log time filter

Tip: You can also access the Filter by Date action by right-clicking an entry.

b. The Date/Time Filter window opens (Figure 10-372). From this window, select a start
date and time and an end date and time.

Figure 10-372 Date/Time Filter window

c. Click Filter and Close. Your audit log panel is now filtered based on its time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 10-373.

Figure 10-373 Reset Date Filter action

򐂰 Select an entry and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an entry.
b. Click Actions → Show entries within. Select minutes, hours, or days. Then, select a
value, as shown in Figure 10-374.

Figure 10-374 Show entries within action

Tip: You can also access the Show entries within action by right-clicking an entry.

Your panel is now filtered based on the time frame.


To disable this time frame filter, click Actions → Reset Date Filter.

10.17 Configuration
In this section, we describe how to configure various properties of the SVC system.

10.17.1 Configuring the network


The procedure to set up and configure SVC network interfaces is described in Chapter 4,
“SAN Volume Controller initial configuration” on page 121.

10.17.2 iSCSI configuration


From the iSCSI panel in the Settings menu, you can configure parameters for the system to
connect to iSCSI-attached hosts, as shown in Figure 10-375.

Figure 10-375 iSCSI Configuration

The following parameters can be updated:


򐂰 System Name
It is important to set the system name correctly because it is part of the iSCSI qualified
name (IQN) for the node.

Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.

To change the system name, click the system name and specify the new name.

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.

򐂰 iSCSI Aliases (Optional)


An iSCSI alias is a user-defined name that identifies the node to the host. Complete the
following steps to change an iSCSI alias:
a. Click an iSCSI alias.
b. Specify a name for it.

Each node has a unique iSCSI name that is associated with two IP addresses. After the
host starts the iSCSI connection to a target node, the IQN of the target node is visible in
the iSCSI configuration tool on the host.
򐂰 iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems
use the iSNS server to manage iSCSI targets and for iSCSI discovery.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole system
under the system properties or for each host definition. The CHAP must be identical on
the server and the system/host definition. You can create an iSCSI host definition without
the use of a CHAP.
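The same iSCSI authentication settings can be applied from the CLI. The following commands are a hedged sketch with example secret and host names; verify the parameters against your code level before you use them:
svctask chsystem -iscsiauthmethod chap -chapsecret mysvcsecret
svctask chhost -chapsecret myhostsecret ITSO_W2K8
The first command sets the system-wide CHAP secret, and the second command sets a CHAP secret for a single host definition.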

10.17.3 Fibre Channel information


As shown in Figure 10-376, you can use the Fibre Channel Connectivity panel to display the
FC connectivity between nodes and other storage systems and hosts that attach through the
FC network. You can filter by selecting one of the following fields:
򐂰 All nodes, storage systems, and hosts
򐂰 Systems
򐂰 Nodes
򐂰 Storage systems
򐂰 Hosts

Figure 10-376 Fibre Channel

10.17.4 Event notifications


The SVC can use Simple Network Management Protocol (SNMP) traps, syslog messages,
and Call Home email to notify you and the IBM Support Center when significant events are
detected. Any combination of these notification methods can be used simultaneously.

Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.

10.17.5 Email notifications
The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.

Complete the following steps to configure email event notifications:


1. From the SVC System panel, move the mouse pointer over the Settings selection and click
Notifications.
2. In the left column, select Email.
3. Click Enable Notifications, as shown in Figure 10-377.

Figure 10-377 Email event notification

4. A wizard opens, as shown in Figure 10-378. In the Email Event Notifications System
Location window, you must first define the system location information (Company name,
Street address, City, State or province, Postal code, and Country or region). Click Next
after you provide this information.

Figure 10-378 Define the system location

5. In the Contact Details window, you must enter contact information to enable IBM Support
personnel to contact the person in your organization to assist with problem resolution
(Contact name, Email address, Telephone (primary), Telephone (alternate), and Machine
location). Ensure that all contact information is valid and click Next, as shown in
Figure 10-379 on page 851.

Figure 10-379 Define the company contact information

6. In the Email Event Notifications Email Servers window (Figure 10-380), configure at least
one email server that is used by your site. Enter a valid IP address and a server port for
each server that is added. Ensure that the email servers are valid. Use Ping to verify the
accessibility to your email server.

Figure 10-380 Configure email servers and inventory reporting window

7. As shown in Figure 10-381 on page 852, you can configure email addresses to receive
notifications. We suggest that you configure an email address for a support user with the error event notification type enabled so that IBM service personnel are notified if an error condition occurs on your system. Ensure that all email addresses are valid.
Optionally, enable inventory reporting. To enable inventory reporting, click the rightmost
icon that is shown in Figure 10-381 on page 852. You see Reporting when you hover over
this icon with your mouse.

Figure 10-381 Enable event types

8. The last window displays a summary of your Email Event Notifications wizard. Click
Finish to complete the setup. The wizard is now closed. More information was added to
the panel, as shown on Figure 10-382. You can edit or disable email notification from this
window.

Figure 10-382 Email Event Notifications window configured
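Email notification can also be configured from the CLI. The following commands are a hedged sketch with example addresses, contact details, and server IP; confirm the correct IBM Call Home destination address with IBM Support:
svctask chemail -reply admin@example.com -contact "John Doe" -primary 5551234 -location "Lab, Building 1"
svctask mkemailserver -ip 10.10.10.25 -port 25
svctask mkemailuser -address callhome@example.com -error on -warning off -info off
svctask startemail
The startemail command activates the email notification function after the servers and recipients are defined.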

10.17.6 SNMP notifications


Simple Network Management Protocol (SNMP) is a standard protocol for managing networks
and exchanging messages. The system can send SNMP messages that notify personnel
about an event. You can use an SNMP manager to view the SNMP messages that are sent by
the SVC.

You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (Figure 10-383 on page 853):
򐂰 IP Address
The address for the SNMP server.
򐂰 Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.

򐂰 Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
򐂰 Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these


notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.

Important: Browse to Recommended Actions to run the fix procedures on these


notifications.

– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 10-383 SNMP configuration

To remove an SNMP server, click the minus sign (-).


To add another SNMP server, click the plus sign (+).
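An SNMP server can also be defined from the CLI with the svctask mksnmpserver command. The IP address and community name in this sketch are examples only:
svctask mksnmpserver -ip 10.10.10.30 -community public -error on -warning on -info off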

Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.

You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (Figure 10-384 on
page 854):
򐂰 IP Address
The IP address for the syslog server.
򐂰 Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.

򐂰 Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
򐂰 Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these


notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.

Important: Browse to Recommended Actions to run the fix procedures on these


notifications.

– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 10-384 Syslog configuration

To remove a syslog server, click the minus sign (-).


To add another syslog server, click the plus sign (+).

The syslog messages can be sent in concise message format or expanded message format.

Example 10-1 shows a compact format syslog message.

Example 10-1 Compact syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2014 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 10-2 shows an expanded format syslog message.

Example 10-2 Full format syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2014 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2

#NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0
(build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000
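A syslog server can also be defined from the CLI with the svctask mksyslogserver command. The following sketch uses an example IP address and facility value:
svctask mksyslogserver -ip 10.10.10.31 -facility 0 -error on -warning on -info off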

10.17.7 System options


Use the System panel to change time and date settings, work with licensing options,
download configuration settings, or download software upgrade packages.

10.17.8 Date and time


Complete the following steps to configure the date and time settings:
1. From the SVC System panel, move the pointer over Settings and click System.
2. In the left column, select Date and Time, as shown in Figure 10-385.

Figure 10-385 Date and Time window

3. From this panel, you can modify the following information:


– Time zone
Select a time zone for your system by using the drop-down list.
– Date and time
The following options are available:
• If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 10-386 on page 856. You can also click Use Browser Settings to
automatically adjust the date and time of your SVC system with your local
workstation date and time.

Figure 10-386 Set Date and Time window

• If you are using a Network Time Protocol (NTP) server, select Set NTP Server IP
Address and then enter the IP address of the NTP server, as shown in
Figure 10-387.

Figure 10-387 Set NTP Server IP Address window

4. Click Save.
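The date and time settings can also be changed from the CLI. The following commands are a hedged sketch; the time zone ID, time value, and NTP address are examples, and you can list the valid time zone IDs with svcinfo lstimezones:
svcinfo lstimezones
svctask settimezone -timezone 522
svctask setsystemtime -time 043015302015
svctask chsystem -ntpip 10.10.10.20
The setsystemtime value is assumed here to use the MMDDHHmmYYYY format; when an NTP server is set with chsystem -ntpip, the manual time setting is not needed.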

10.17.9 Licensing
Complete the following steps to configure the licensing settings:
1. From the SVC Settings panel, move the pointer over Settings and click System.
2. In the left column, select Licensing, as shown in Figure 10-388.

Figure 10-388 Licensing window

3. In the Select Your License section, you can choose between the following licensing
options for your SVC system:
– Standard Edition: Select the number of terabytes that are available for your license for
virtualization and for Copy Services functions for this license option.
– Entry Edition: This type of licensing is based on the number of the physical disks that
you are virtualizing and whether you selected to license the FlashCopy function, the
Metro Mirror and Global Mirror function, or both.
4. Set the licensing options for the SVC for the following elements:
– Virtualization Limit
Enter the capacity of the storage that will be virtualized by this system.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.

Important: The Used capacity for FlashCopy mapping is the sum of the capacities of all of the volumes that are the source volumes of a FlashCopy mapping.

– Global and Metro Mirror Limit


Enter the capacity that is available for Metro Mirror and Global Mirror relationships.

Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship; both master volumes and auxiliary volumes are included.

– Real-time Compression Limit


Enter the total number of terabytes of virtual capacity that are licensed for
compression.
– Virtualization Limit (Entry Edition only)
Enter the total number of physical drives that you are authorized for virtualization.
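The license settings can also be changed from the CLI with the svctask chlicense command. The capacity values in this sketch are examples only and are specified in terabytes:
svctask chlicense -virtualization 100
svctask chlicense -flash 20 -remote 20 -compression 10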

10.17.10 Setting GUI preferences


Complete the following steps to configure the GUI preferences:
1. From the SVC Settings window, move the pointer over Settings and click GUI Preferences
(Figure 10-389 on page 858).

Figure 10-389 GUI Preferences window

2. You can configure the following elements:


– Refresh GUI Objects
This option causes the GUI to refresh all of its views and clears the GUI cache. The
GUI looks up every object again.

Important: This option is for an IBM Support action only.

– Restore Default Browser Preferences


This option deletes all GUI preferences that are stored in the browser and restores the
default preferences.
– Information Center
You can change the URL of the IBM SAN Volume Controller Knowledge Center, as
shown in Figure 10-390.

Figure 10-390 Change the IBM SAN Volume Controller Knowledge Center URL

– The Extent Sizes option enables you to select the extent size during storage pool creation.
– The Accessibility option enables low graphic mode when the system is connected
through a slower network.

10.18 Upgrading the SAN Volume Controller software


In this section, we describe the operations to upgrade your SVC software from version 7.3 to
version 7.4.

Note: Later, we show the process when you upgrade from 7.4.0.x to 7.4.0.y.

The software upgrade package name ends in four positive integers that are separated by dots. For example, a software upgrade package might have the following name:
IBM_2145_INSTALL_7.4.0.0

10.18.1 Precautions before the upgrade


In this section, we describe the precautions that you need to take before you attempt an
upgrade.

Important: Before you attempt any SVC code update, read and understand the SVC
concurrent compatibility and code cross-reference matrix. For more information, see the
following website and click Latest IBM SAN Volume Controller code:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707

During the upgrade, each node in your SVC clustered system is automatically shut down and
restarted by the upgrade process. Because each node in an I/O Group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to ensure that all I/O
paths between all hosts and SANs work.

If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the SVC node that provides that access is shut down during the
upgrade process. You can check the I/O paths by using SDD datapath query commands.

10.18.2 SAN Volume Controller upgrade test utility


The SVC software upgrade test utility is an SVC software utility that checks for known issues
that can cause problems during an SVC software upgrade. The SVC software upgrade test
utility is available at this website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

You can use the svcupgradetest utility to check for known issues that might cause problems
during an SVC software upgrade.

The software upgrade test utility can be downloaded in advance of the upgrade process, or it
can be downloaded and run directly during the software upgrade, as guided by the upgrade
wizard.

You can run the utility multiple times on the same SVC system to perform a readiness check in preparation for a software upgrade. We strongly advise that you run this utility a final time immediately before you apply the SVC upgrade to ensure that no new releases of the utility became available since you originally downloaded it.

The installation and use of this utility is nondisruptive and the utility does not require the
restart of any SVC node; therefore, host I/O is not interrupted. The utility is only installed on
the current configuration node.

System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the latest information at this website:
https://ibm.biz/BdE8Pe

This utility is intended to supplement rather than duplicate the existing tests that are
performed by the SVC upgrade procedure (for example, checking for unfixed errors in the
error log).
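The test utility can also be installed and run from the CLI. The following commands are a hedged sketch; the PuTTY session name, IP address, utility file name, and target version are examples only, and the upload directory follows the convention that is used for pscp transfers in this book:
pscp -load ITSO_SVC3 IBM2145_INSTALL_svcupgradetest_20.1 admin@10.18.229.81:/home/admin/upgrade
svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_20.1
svcupgradetest -v 7.4.0.0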

10.18.3 Upgrade procedure from version 7.3.x.x to version 7.4.x.x


To upgrade the SVC system software from version 7.3.x.x to version 7.4.x.x, complete the
following steps:
1. Log in with your administrator user ID and password. The SVC management home page
opens. Click Settings → System → Upgrade Software.
2. The window that is shown in Figure 10-391 opens.

Figure 10-391 Upgrade Software

From the window that is shown in Figure 10-391, you can select the following options:
– Check for updates: Use this option to check, on the IBM website, whether an SVC
software version is available that is newer than the version that you installed on your
SVC. You need an Internet connection to perform this check.
– Launch Upgrade Wizard: Use this option to start the software upgrade process.
3. Click Launch Upgrade Wizard to start the upgrade process. The window that is shown in
Figure 10-392 opens.

Figure 10-392 Upgrade Package test utility

From the Upgrade Package window, download the upgrade test utility from the IBM
website. Click Browse to upload it from the local disk. When the upgrade test utility is
uploaded, the window that is shown in Figure 10-393 opens.

Figure 10-393 Upload upgrade test utility completed

4. When you click Next, the upgrade test utility is installed. The window that is shown in
Figure 10-394 opens. Click Close.

Figure 10-394 Upgrade test utility applied

5. The window that is shown in Figure 10-395 on page 862 opens. From this window, you
can run your upgrade test utility for the level that you need. Enter the version to which you
want to upgrade and the upgrade test utility checks the system to ensure that the system
is ready for an upgrade to this version.

Figure 10-395 Run upgrade test utility

6. Click Next. The upgrade test utility now runs. You see the suggested actions (if any
actions are needed) or the window that is shown in Figure 10-396.

Figure 10-396 Upgrade test utility result

7. In our case (Figure 10-397), we got one warning (system was not configured to send
emails) and no errors. In the case of an error, you must fix the problem before you can
proceed.

Figure 10-397 Upgrade utility detects no errors

8. Click Next to start the SVC software upload procedure. The Upgrade Package window
that is shown in Figure 10-398 on page 863 opens.

Figure 10-398 Upgrade Package window

9. From the window that is shown in Figure 10-398, either download the SVC upgrade
package directly from the IBM website, or locate and upload the software upgrade
package from your disk.
10.Click Next, and you see the windows that are shown in Figure 10-399 and Figure 10-400.

Figure 10-399 Uploading the SVC software upgrade package

Figure 10-400 shows that the SVC package upload completed.

Figure 10-400 Upload of the SVC software package complete

11.Click Next and you see the window that is shown in Figure 10-401 on page 864. The automatic and manual update methods are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system might occur. After all the nodes
in the system are successfully restarted with the new code level, the new level is
automatically committed.

During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, a possibility exists that performance can be affected but this situation
depends on the environment. If any restrictions apply to the operations that can be
done during the update, these restrictions are documented on the product website that
you use to download the update packages. During the update procedure, most of the
configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.
After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
12.Select Automatic upgrade, which is fully controlled by the system.

Figure 10-401 System ready for the upgrade

13.When you click Finish, the SVC software upgrade starts. The window that is shown in
Figure 10-402 on page 865 opens.

Figure 10-402 Upgrading a node

14.When you click OK, you complete the process to upgrade the SVC software. Now, you see
the window that is shown in Figure 10-403.

Figure 10-403 Upgrade in progress

One node starts the upgrade, as shown in Figure 10-404.

Figure 10-404 Upgrade in progress

After a few minutes, the window that is shown in Figure 10-405 opens, which shows that
the node was upgraded.

Figure 10-405 One node is upgraded

The second node is shown as upgrading in Figure 10-406.

Figure 10-406 One node is ready and the second node is updating

15.The new SVC software version is installed on the remaining node in the system. You can
check the upgrade status, as shown in Figure 10-407.

Figure 10-407 Upgrade completed for both nodes

16.Now, you must commit and confirm that the code is correct and that the upgrade was
successful. Click Confirm Update, as shown in Figure 10-408 on page 867.

Figure 10-408 Confirm the update

17.The system performs the confirmation process on both nodes, as illustrated in


Figure 10-409.

Figure 10-409 Update system confirmation

After the confirmation process finishes, the SVC software upgrade task is complete.
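As an alternative to the GUI wizard, the upgrade can be applied from the CLI. The following commands are a hedged sketch; the PuTTY session name, IP address, and package file name are examples only, and the upload directory should be verified for your environment:
pscp -load ITSO_SVC3 IBM_2145_INSTALL_7.4.0.0 admin@10.18.229.81:/home/admin/upgrade
svctask applysoftware -file IBM_2145_INSTALL_7.4.0.0
svcinfo lssoftwareupgradestatus
The lssoftwareupgradestatus command can be used to monitor the progress of the automatic update.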

10.18.4 Upgrade procedure from version 7.4.x.x to 7.4.y.y


To upgrade the SVC system software from version 7.4.x.x to version 7.4.y.y, complete the
following steps:
1. Log in with your administrator user ID and password. The SVC management home page
opens. Click Settings → System → Upgrade Software.
2. The Update System window that is shown in Figure 10-410 on page 868 opens.

Figure 10-410 Upgrade Software

From the window that is shown in Figure 10-410, you can only select the following option:
– Update: This option opens the firmware selection window.

Important: The firmware package must already be downloaded. You can obtain the latest firmware at this website:
https://ibm.biz/BdE8Pe

3. When you click Update, the Update System selection panel opens, as shown in
Figure 10-411.

Figure 10-411 Firmware selection panel

4. If all selected firmware is valid, you see Figure 10-412 on page 869. The test utility,
package information, code level, and Update option change color. Click Update to
proceed.

Figure 10-412 Validated firmware package

5. The selection window opens. Choose either to update the system automatically or
manually. The differences are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system can occur. After all the nodes in
the system are successfully restarted with the new code level, the new level is
automatically committed.
During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, performance might be affected. If any restrictions apply to the operations that
can be done during the update, these restrictions are documented on the product
website that you use to download the update packages. During the update procedure,
most configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.

After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
6. In Figure 10-413, we select Automatic update.

Figure 10-413 Select the type of update

7. When you click Finish, the SVC software upgrade starts. The window that is shown in
Figure 10-414 opens. The system starts with the upload of the test utility and the SVC
system firmware.

Figure 10-414 Uploading the test utility and the SVC system firmware

8. After a while, the system starts automatically to run the update test utility.

Figure 10-415 Running the update test utility

9. When the system detects an issue or an error, you are guided by the GUI. Click Read
more, as shown in Figure 10-416.

Figure 10-416 Issues that are detected by the update test utility

10.The Update Test Utility Results panel opens and describes the results, as shown in
Figure 10-417.

Figure 10-417 Description of the warning from the test utility

11.In our case, we received a warning because we did not enable email notification. So, we
can click Close and proceed with the update. As shown in Figure 10-418, we click
Resume.

Figure 10-418 Resume the installation of the SVC firmware

12.Because email notification is not set up, another warning appears,
as shown in Figure 10-419. We proceed and click Yes.

Figure 10-419 Warning before you can continue

13.The update process starts, as shown in Figure 10-420.

Figure 10-420 Update process starts

14.When the update for the first node is complete, the system pauses for approximately 30
minutes to ensure that all paths are reestablished to the now updated node (Figure 10-421
on page 873).

Figure 10-421 System paused to reestablish the paths

15.After a few minutes, a node failover happens and closes the current web session. Click Yes
to reestablish the web session, as shown in Figure 10-422.

Figure 10-422 Node failover

16.After a refresh, you can see that the system is updated.

Figure 10-423 System is updated

The update is now complete.

Appendix A. Performance data and statistics gathering

In this appendix, we provide a brief overview of the performance analysis capabilities of the
IBM System Storage SAN Volume Controller (SVC) 7.4. We also describe a method that you
can use to collect and process SVC performance statistics.

It is beyond the intended scope of this book to provide an in-depth understanding of


performance statistics or explain how to interpret them. For more information about the
performance of the SVC, see SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open



SAN Volume Controller performance overview
Although storage virtualization with the SVC provides many administrative benefits, it can
also provide a substantial increase in performance for various workloads. The caching
capability of the SVC and its ability to stripe volumes across multiple disk arrays can provide a
significant performance improvement over what can otherwise be achieved when midrange
disk subsystems are used.

To ensure that the performance levels of your system that you want are maintained, monitor
performance periodically to provide visibility to potential problems that exist or are developing
so that they can be addressed in a timely manner.

Performance considerations
When you are designing an SVC storage infrastructure or maintaining an existing
infrastructure, you must consider many factors in terms of their potential effect on
performance. These factors include, but are not limited to dissimilar workloads competing for
the same resources, overloaded resources, insufficient available resources, poor performing
resources, and similar performance constraints.

Remember the following high-level rules when you are designing your storage area network
(SAN) and SVC layout:
򐂰 Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The recommendation is to maintain
a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to lead to I/O
bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on
the edges and the SVC is the core.
򐂰 Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes
a multiple-switch SAN fabric design.
򐂰 Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In
standard setups, this load can be ignored. Although this area is not entirely negligible, it
does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available with version 6.3. When the system is running in this manner, the number of ISL
links becomes more important. As with the storage-to-SVC ISL oversubscription, this load
also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning
when you determine the number of ISLs to implement. If you need assistance, we
recommend that you contact your IBM representative and request technical assistance.
򐂰 ISL trunking/port channeling
For the best performance and availability, we highly recommend that you use ISL trunking
or port channeling. Independent ISL links can easily become overloaded and turn into
performance bottlenecks. Bonded or trunked ISLs automatically share load and provide
better redundancy in a failure.

򐂰 Number of paths per host multipath device
The maximum supported number of paths per multipath device that is visible on the host is
eight. Although the Subsystem Device Driver Path Control Module (SDDPCM), related
products, and most vendor multipathing software can support more paths, the SVC
expects a maximum of eight paths. In general, using more than eight paths tends to degrade performance rather than improve it. Although the SVC can work with more than eight paths, this design is technically unsupported.
򐂰 Do not intermix dissimilar array types or sizes
Although the SVC supports an intermix of differing storage within storage pools, it is best
to always use the same array model, RAID mode, RAID size (RAID 5 6+P+S does not mix
well with RAID 6 14+2), and drive speeds.

Rules and guidelines are no substitution for monitoring performance. Monitoring performance
can provide a validation that design expectations are met and identify opportunities for
improvement.

SAN Volume Controller performance perspectives


The SVC is a combination product that consists of software and hardware. The software was
developed by the IBM Research Group and was designed to run on commodity hardware
(mass-produced Intel-based CPUs with mass-produced expansion cards) and to provide
distributed cache and a scalable cluster architecture. One of the main goals of this design
was to take advantage of hardware refreshes. Currently, the SVC cluster is scalable up to eight nodes
and these nodes can be swapped for newer hardware while online. This capability provides a
great investment value because the nodes are relatively inexpensive and a node swap can be
done online. This capability provides an instant performance boost with no license changes.
Newer nodes, such as the 2145-CG8 or 2145-DH8 models, which dramatically increased the cache from 8 GB to 64 GB per node, provide an extra benefit on top of the typical refresh cycle. For more
information about the node replacement and swap and instructions about adding nodes, see
this website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437

The performance is near linear when nodes are added into the cluster until performance
eventually becomes limited by the attached components. Also, although virtualization with the
SVC provides significant flexibility in terms of the components that are used, it does not
diminish the necessity of designing the system around the components so that it can deliver
the level of performance that you want.

The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the end goal is that you always want to maximize the bandwidth that is
available to the SVC ports. The SVC is one of the few devices that can drive ports to their
limits on average, so it is imperative that you put significant thought into planning the SAN
layout.

Essentially, the SVC performance improvements are gained by spreading the workload
across a greater number of back-end resources and more caching that are provided by the
SVC cluster. However, the performance of individual resources eventually becomes the
limiting factor.



Performance monitoring
In this section, we highlight several performance monitoring techniques.

Collecting performance statistics


The SVC is constantly collecting performance statistics. The default frequency by which files
are created is 5-minute intervals. Before version 4.3.0, the default was 15-minute intervals,
with a supported range of 15 - 60 minutes. The collection interval can be changed by using
the startstats command.

The statistics files (VDisk, MDisk, and Node) are saved at the end of the sampling interval
and a maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion.
This design provides statistics for the most recent 80-minute period if the default 5-minute
sampling interval is used. The SVC supports user-defined sampling intervals of 1 - 60
minutes.

The maximum space that is required for a performance statistics file is 1,153,482 bytes. Up to
128 (16 per each of the three types across eight nodes) different files can exist across eight
SVC nodes. This design makes the total space requirement a maximum of 147,645,694 bytes
for all performance statistics from all nodes in an SVC cluster.

Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all
nodes in an SVC cluster when you are in time-critical situations. The required size is not
otherwise important because SVC node hardware can map the space.

You can define the sampling interval by using the startstats -interval 2 command to
collect statistics at 2-minute intervals. For more information, see 9.9.7, “Starting statistics
collection” on page 556.

Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within the SVC, they shorten the amount of time that the historical
data is available on the SVC. For example, instead of an 80-minute period of data with the
default five-minute interval, if you adjust to 2-minute intervals, you have a 32-minute period
instead.

Since SVC 5.1.0, cluster-level statistics are no longer supported. Instead, use the per node
statistics that are collected. The sampling of the internal performance counters is coordinated
across the cluster so that when a sample is taken, all nodes sample their internal counters at
the same time. It is important to collect all files from all nodes for a complete analysis. Tools,
such as Tivoli Storage Productivity Center, perform this intensive data collection for you.

Statistics file naming


The statistics files that are generated are written to the /dumps/iostats/ directory. The file
name is in the following formats:
򐂰 Nm_stats_<node_frontpanel_id>_<date>_<time> for managed disk (MDisk) statistics
򐂰 Nv_stats_<node_frontpanel_id>_<date>_<time> for virtual disks (VDisks) statistics
򐂰 Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
򐂰 Nd_stats_<node_frontpanel_id>_<date>_<time> for disk drive statistics, not used for the
SVC

The node_frontpanel_id is the front panel ID of the node on which the statistics were collected. The date is in
the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an
MDisk statistics file name:

Nm_stats_113986_141031_214932

Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.

Example A-1 File names of per node statistics


IBM_2145:ITSO_SVC3:superuser>svcinfo lsiostatsdumps
id iostat_filename
1 Nd_stats_113986_141031_214932
2 Nv_stats_113986_141031_214932
3 Nv_stats_113986_141031_215132
4 Nd_stats_113986_141031_215132
5 Nd_stats_113986_141031_215332
6 Nv_stats_113986_141031_215332
7 Nv_stats_113986_141031_215532
8 Nd_stats_113986_141031_215532
9 Nv_stats_113986_141031_215732
10 Nd_stats_113986_141031_215732
11 Nv_stats_113986_141031_215932
12 Nd_stats_113986_141031_215932
13 Nm_stats_113986_141031_215932

Tip: The performance statistics files can be copied from the SVC nodes to a local drive on
your workstation by using the pscp.exe (included with PuTTY) from an MS-DOS command
line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_SVC3
admin@10.18.229.81:/dumps/iostats/* c:\statsfiles

Use the -load parameter to specify the session that is defined in PuTTY.

Specify the -unsafe parameter when you use wildcards.

You can obtain PuTTY at this website:


http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

qperf
qperf is an unofficial (no-charge and unsupported) collection of awk scripts. qperf was made
available for download from IBM Techdocs. It was written by Christian Karpp. qperf is
designed to provide a quick performance overview by using the command-line interface (CLI)
and a UNIX Korn shell. (It can also be used with Cygwin.)

qperf is available for download from this website:


http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105947

svcmon
svcmon is no longer available.

The performance statistics files are in .xml format. They can be manipulated by using various
tools and techniques. Figure A-1 on page 880 shows an example of the type of chart that you
can produce by using the SVC performance statistics.



Figure A-1 Spreadsheet example

Real-time performance monitoring


Starting with version 6.2.0, the SVC supports real-time performance monitoring. Real-time
performance statistics provide short-term status information for the SVC. The statistics are
shown as graphs in the management GUI or can be viewed from the CLI. With system-level
statistics, you can quickly view the CPU usage and the bandwidth of volumes, interfaces, and
MDisks. Each graph displays the current bandwidth in megabytes per second (MBps) or I/Os
per second (IOPS), and a view of bandwidth over time.

Each node collects various performance statistics, mostly at 5-second intervals, and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node. As with system statistics,
node statistics help you to evaluate whether the node is operating within normal performance
metrics.

Real-time performance monitoring gathers the following system-level performance statistics:


򐂰 CPU utilization
򐂰 Port utilization and I/O rates
򐂰 Volume and MDisk I/O rates
򐂰 Bandwidth
򐂰 Latency

Real-time statistics are not a configurable option and cannot be disabled.

Real-time performance monitoring with the CLI


The lsnodestats and lssystemstats commands are available for monitoring the statistics
through the CLI. Next, we show you examples of how to use them.

The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-2 (the output is truncated and shows only part of
the available statistics). You can also specify a node name in the command to limit the output
for a specific node.

Example A-2 lsnodestats command output


IBM_2145:ITSO_SVC3:admin>lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 Node 1 compression_cpu_pc 0 0 141031225017
1 Node 1 cpu_pc 2 2 141031225017
1 Node 1 fc_mb 0 9 141031224722
1 Node 1 fc_io 1086 1089 141031224857
1 Node 1 sas_mb 0 0 141031225017
1 Node 1 sas_io 0 0 141031225017
1 Node 1 iscsi_mb 0 0 141031225017
1 Node 1 iscsi_io 0 0 141031225017
1 Node 1 write_cache_pc 0 0 141031225017
1 Node 1 total_cache_pc 0 0 141031225017
1 Node 1 vdisk_mb 0 0 141031225017
1 Node 1 vdisk_io 0 0 141031225017
1 Node 1 vdisk_ms 0 0 141031225017
1 Node 1 mdisk_mb 0 9 141031224722
1 Node 1 mdisk_io 0 66 141031224722
1 Node 1 mdisk_ms 0 5 141031224722
1 Node 1 drive_mb 0 0 141031225017
1 Node 1 drive_io 0 0 141031225017
1 Node 1 drive_ms 0 0 141031225017
1 Node 1 vdisk_r_mb 0 0 141031225017
.....
2 Node 2 compression_cpu_pc 0 0 141031225016
2 Node 2 cpu_pc 0 1 141031225006
2 Node 2 fc_mb 0 0 141031225016
2 Node 2 fc_io 1029 1051 141031224806
2 Node 2 sas_mb 0 0 141031225016
2 Node 2 sas_io 0 0 141031225016
2 Node 2 iscsi_mb 0 0 141031225016
2 Node 2 iscsi_io 0 0 141031225016
2 Node 2 write_cache_pc 0 0 141031225016
2 Node 2 total_cache_pc 0 0 141031225016
2 Node 2 vdisk_mb 0 0 141031225016
2 Node 2 vdisk_io 0 0 141031225016
2 Node 2 vdisk_ms 0 0 141031225016
2 Node 2 mdisk_mb 0 0 141031225016
2 Node 2 mdisk_io 0 1 141031224941
2 Node 2 mdisk_ms 0 20 141031224741
2 Node 2 drive_mb 0 0 141031225016
2 Node 2 drive_io 0 0 141031225016
2 Node 2 drive_ms 0 0 141031225016
2 Node 2 vdisk_r_mb 0 0 141031225016
...



The previous example shows statistics for the two node members of cluster ITSO_SVC3. For
each node, the following columns are displayed:
򐂰 stat_name: Provides the name of the statistic field
򐂰 stat_current: The current value of the statistic field
򐂰 stat_peak: The peak value of the statistic field in the last 5 minutes
򐂰 stat_peak_time: The time that the peak occurred
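
For example, to restrict the output to a single node, you can specify the node name on the
command line. The following minimal sketch also uses the -delim parameter of the SVC
information commands to produce colon-separated output that is easier to parse in scripts.
The node name node1 is a placeholder; substitute one of the node names that the lsnode
command reports for your cluster, and treat the output lines as illustrative only:

IBM_2145:ITSO_SVC3:admin>lsnodestats -delim : node1
node_id:node_name:stat_name:stat_current:stat_peak:stat_peak_time
1:node1:cpu_pc:2:2:141031225017
1:node1:fc_io:1086:1089:141031224857
...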

In contrast, the lssystemstats command lists the same set of statistics as the lsnodestats
command, but the values represent the aggregate of all nodes in the cluster. The values for
these statistics are calculated from the node statistics values in the following way:
򐂰 Bandwidth: Sum of bandwidth of all nodes
򐂰 Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
򐂰 IOPS: Total IOPS of all nodes
򐂰 CPU percentage: Average CPU percentage of all nodes

Example A-3 shows the resulting output of the lssystemstats command.

Example A-3 lssystemstats command output


IBM_2145:ITSO_SVC3:admin>lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 0 0 141031230031
cpu_pc 0 1 141031230021
fc_mb 0 9 141031225721
fc_io 1942 2175 141031225836
sas_mb 0 0 141031230031
sas_io 0 0 141031230031
iscsi_mb 0 0 141031230031
iscsi_io 0 0 141031230031
write_cache_pc 0 0 141031230031
total_cache_pc 0 0 141031230031
vdisk_mb 0 0 141031230031
vdisk_io 0 0 141031230031
vdisk_ms 0 0 141031230031
mdisk_mb 0 9 141031225721
...
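
These commands also lend themselves to simple scripted monitoring from a workstation. The
following sketch is an assumption-based example and not part of the SVC software: it
assumes that SSH key authentication is configured for the admin user and that ITSO_SVC3
resolves to the cluster management address. The loop runs on the workstation and extracts
only the cpu_pc line from the delimited lssystemstats output every five seconds:

while true
do
  ssh admin@ITSO_SVC3 lssystemstats -delim : | grep "^cpu_pc:"
  sleep 5
done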

Table A-1 has a brief description of each of the statistics that are presented by the
lssystemstats and lsnodestats commands.

Table A-1 lssystemstats and lsnodestats statistics field name descriptions


Field name Unit Description

cpu_pc Percentage Utilization of node CPUs.

fc_mb MBps Fibre Channel bandwidth.

fc_io IOPS Fibre Channel throughput.

sas_mb MBps Serial-attached SCSI (SAS) bandwidth.

sas_io IOPS SAS throughput.

iscsi_mb MBps Internet Small Computer System Interface (iSCSI) bandwidth.

iscsi_io IOPS iSCSI throughput.

write_cache_pc Percentage Write cache fullness. Updated every 10 seconds.

total_cache_pc Percentage Total cache fullness. Updated every 10 seconds.

vdisk_mb MBps Total VDisk bandwidth.

vdisk_io IOPS Total VDisk throughput.

vdisk_ms Milliseconds Average VDisk latency.

mdisk_mb MBps MDisk (SAN and RAID) bandwidth.

mdisk_io IOPS MDisk (SAN and RAID) throughput.

mdisk_ms Milliseconds Average MDisk latency.

drive_mb MBps Drive bandwidth.

drive_io IOPS Drive throughput.

drive_ms Milliseconds Average drive latency.

vdisk_w_mb MBps VDisk write bandwidth.

vdisk_w_io IOPS VDisk write throughput.

vdisk_w_ms Milliseconds Average VDisk write latency.

mdisk_w_mb MBps MDisk (SAN and RAID) write bandwidth.

mdisk_w_io IOPS MDisk (SAN and RAID) write throughput.

mdisk_w_ms Milliseconds Average MDisk write latency.

drive_w_mb MBps Drive write bandwidth.

drive_w_io IOPS Drive write throughput.

drive_w_ms Milliseconds Average drive write latency.

vdisk_r_mb MBps VDisk read bandwidth.

vdisk_r_io IOPS VDisk read throughput.

vdisk_r_ms Milliseconds Average VDisk read latency.

mdisk_r_mb MBps MDisk (SAN and RAID) read bandwidth.

mdisk_r_io IOPS MDisk (SAN and RAID) read throughput.

mdisk_r_ms Milliseconds Average MDisk read latency.

drive_r_mb MBps Drive read bandwidth.

drive_r_io IOPS Drive read throughput.

drive_r_ms Milliseconds Average drive read latency.

Real-time performance monitoring with the GUI
The real-time statistics are also available from the SVC GUI. Click Monitoring →
Performance (as shown in Figure A-2) to open the Performance monitoring window.

Figure A-2 IBM SAN Volume Controller Monitoring menu

As shown in Figure A-3 on page 885, the Performance monitoring window is divided into
sections that provide utilization views for the following resources:
򐂰 CPU Utilization: Shows the overall CPU usage percentage.
򐂰 Volumes: Shows the overall volume utilization with the following fields:
– Read
– Write
– Read latency
– Write latency
򐂰 Interfaces: Shows the overall statistics for each of the available interfaces:
– Fibre Channel
– iSCSI
– SAS
– IP Remote Copy
򐂰 MDisks: Shows the following overall statistics for the MDisks:
– Read
– Write
– Read latency
– Write latency

Figure A-3 Performance monitoring window

You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-4.

Figure A-4 Select a system node

You can also change the metric between MBps or IOPS, as shown in Figure A-5.

Figure A-5 Changing to MBps or IOPS

On any of these views, you can select any point with your cursor to see the exact value and
when it occurred. When you place your cursor over the timeline, it becomes a dotted line with
the various values gathered, as shown in Figure A-6 on page 886.

Figure A-6 Detailed resource utilization

For each of the resources, you can choose which values to display by selecting or clearing
them. For example, the MDisks view offers four fields: Read, Write, Read latency, and Write
latency. In the example that is shown in Figure A-7, Read is not selected.

Figure A-7 Detailed resource utilization

Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, the use of .xml files is a
less practical and more complicated method to analyze the SVC performance statistics. Tivoli
Storage Productivity Center for Disk is the supported IBM tool to collect and analyze SVC
performance statistics.

Tivoli Storage Productivity Center for Disk is installed separately on a dedicated system and it
is not part of the SVC software bundle.

For more information about the use of Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open

SVC port quality statistics: Tivoli Storage Productivity Center for Disk Version 4.2.1
supports the SVC port quality statistics that are provided in SVC versions 4.3 and later.

Monitoring these metrics and the performance metrics can help you to maintain a stable
SAN environment.


Appendix B. Terminology
In this appendix, we define the IBM System Storage SAN Volume Controller (SVC) terms that
are commonly used in this book.

To see the complete set of terms that relate to the SAN Volume Controller, see the Glossary
section of the IBM SAN Volume Controller Knowledge Center, which is available at this
website:
http://www.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html

Commonly encountered terms
This appendix includes the following SVC terminology.

Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 900.

Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 900.

Automatic data placement mode


Automatic data placement mode is an Easy Tier operating mode in which the host activity on
all the volume extents in a pool is measured, a migration plan is created, and then automatic
extent migration is performed.

Back end
See “Front end and back end” on page 894.

Caching I/O Group


The caching I/O Group is the I/O Group in the system that performs the cache function for a
volume.

Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.

Canister
A canister is a single processing unit within a storage system.

Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are FlashCopy, Metro Mirror, Global Mirror, and virtualization. See also
“FlashCopy” on page 893, “Metro Mirror” on page 896, and “Virtualization” on page 900.

Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.

Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Instead of being created directly from managed disks (MDisks), child pools

are created from existing capacity that is allocated to a parent pool. As with parent pools,
volumes can be created that specifically use the capacity that is allocated to the child pool.
Child pools are similar to parent pools with similar properties. Child pools can be used for
volume copy operation. Also, see “Parent pool” on page 897.

Clustered system (SAN Volume Controller)


A clustered system, which was known as a cluster, is a group of up to eight SVC nodes that
presents a single configuration, management, and service interface to the user.

Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a Flash disk. A cold extent also refers to an extent that needs
to be migrated onto an HDD if it is on a Flash disk drive.

Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.

Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.

Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.

Container
A container is a software object that holds or organizes other software objects or entities.

Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC
nodes are typically connected to a “redundant SAN” that is made up of two counterpart SANs.
A counterpart SAN is often called a SAN fabric.

Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to guarantee the recoverability of applications.

Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.

Directed Maintenance Procedures


The fix procedures, which are also known as Directed Maintenance Procedures (DMPs),
ensure that you fix any outstanding errors in the error log. To fix errors, from the Monitoring
panel, click Events. The Next Recommended Action is displayed at the top of the Events
window. Select Run This Fix Procedure and follow the instructions.

Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the SVC cluster likely have
different performance attributes because of the type of disk or RAID array on which they are
installed. The MDisks can be on 15 K RPM Fibre Channel (FC) or serial-attached SCSI (SAS)
disk, Nearline SAS, or Serial Advanced Technology Attachment (SATA), or even Flash Disks.
Therefore, a storage tier attribute is assigned to each MDisk and the default is generic_hdd.
SVC 6.1 introduced a new disk tier attribute for Flash Disk, which is known as generic_ssd.

Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data
placement of a volume’s extents in a multitiered storage pool. The pool normally contains a
mix of Flash Disks and HDDs. Easy Tier measures host I/O activity on the volume’s extents
and migrates hot extents onto the Flash Disks to ensure the maximum performance.

Encryption of data at rest


Encryption of data at rest is the encryption of the data that is on the storage system.

Enhanced Stretched Systems


A stretched system is an extended high availability (HA) method that is supported by the SVC
to enable I/O operations to continue after the loss of half of the system. Enhanced Stretched
Systems provide the following primary benefits:
򐂰 In addition to the automatic failover that occurs when a site fails in a standard stretched
system configuration, an Enhanced Stretched System provides a manual override that can
be used to choose which one of the two sites continues operation.
򐂰 Enhanced Stretched Systems intelligently route I/O traffic between nodes and controllers
to reduce the amount of I/O traffic between sites, and to minimize the impact to host
application I/O latency.
򐂰 Enhanced Stretched Systems include an implementation of additional policing rules to
ensure that the correct configuration of a standard stretched system is used.

Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the
volume extents in a pool is measured only. No automatic extent migration is performed.

Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Before SVC V6.1, this situation was known as an error.

Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to IBM and to provide an entry point into the service guide.

Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
SVC. An event ID is used internally in the cluster to identify the error.

Excluded condition
The excluded condition is a status condition. It describes an MDisk that the SVC decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.

Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of an extent can range from 16 MB to 8 GB.

External storage
External storage refers to managed disks (MDisks) that are SCSI logical units that are
presented by storage systems that are attached to and managed by the clustered system.

Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.

Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
a software, hardware, or network interruption. See also “Failback”.

Fibre Channel (FC) port logins


Fibre Channel (FC) port logins refer to the number of hosts that can see any one SVC node
port. The SVC has a maximum limit per node port of FC logins that are allowed.

Field-replaceable units
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.

FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is created. The
target volume maintains the contents of the volume at the point in time when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.

FlashCopy mapping
A FlashCopy mapping is the relationship that is defined between a source volume and a
target volume for a point-in-time copy. The mapping controls how and when the contents of
the source volume are copied to the target volume.

FlashCopy relationship
See “FlashCopy mapping” on page 894.

FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 897.

Front end and back end


The SVC takes MDisks to create pools of capacity from which volumes are created and
presented to application servers (hosts). The MDisks are in the controllers at the back end of
the SVC and in the SVC to the back-end controller zones. The volumes that are presented to
the hosts are in the front end of the SVC.

Global Mirror
Global Mirror is a method of asynchronous replication that maintains data consistency across
multiple volumes within or across multiple systems. Global Mirror is generally used where
distances between the source site and target site cause increased latency beyond what the
application can accept.

Global Mirror with Change Volumes


Change volumes are used to record changes to the primary and secondary volumes of a
remote copy relationship. A FlashCopy mapping exists between a primary and its change
volume and a secondary and its change volume.

Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KiB or
256 KiB) in the SVC. A grain is also the unit to extend the real size of a thin-provisioned
volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).

Host bus adapter (HBA)


A host bus adapter (HBA) is an interface card that connects a server to the SAN environment
through its internal bus system, for example, PCI Express.

Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or Internet Small
Computer System Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI IDs
are mapped to volumes separately. The intent is to have a one-to-one relationship between
hosts and host IDs, although this relationship cannot be policed.

Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. (Host mapping is equivalent to LUN masking.) Before SVC V6.1, this
process was known as VDisk-to-host mapping.

Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a Flash Disk.

Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume.

Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
managed disk (MDisk) to the volume.

I/O Group
Each pair of SVC cluster nodes is known as an input/output (I/O) Group. An I/O Group has a
set of volumes that are associated with it that are presented to host systems. Each SVC node
is associated with exactly one I/O Group. The nodes in an I/O Group provide a failover and
failback function for each other.

Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
enclosures and in nodes that are part of the SVC cluster.

Internet Small Computer System Interface (iSCSI) qualified name (IQN)


Internet Small Computer System Interface (iSCSI) qualified name (IQN) refers to special
names that identify both iSCSI initiators and targets. IQN is one of the three name formats
that is provided by iSCSI. The IQN format is iqn.yyyy-mm.{reversed domain name}. For
example, the default for an SVC node is:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>.

Internet storage name service (iSNS)


iSNS refers to the Internet storage name service (iSNS) protocol that is used by a host
system to manage iSCSI targets and the automated iSCSI discovery, management, and
configuration of iSCSI and FC devices. It was defined in Request for Comments (RFC) 4171.

Inter-switch link (ISL) hop


An inter-switch link (ISL) is a connection between two switches and counted as one ISL hop.
The number of hops is always counted on the shortest route between two N-ports (device
connections). In an SVC environment, the number of ISL hops is counted on the shortest
route between the pair of nodes that are farthest apart. The SVC supports a maximum of
three ISL hops.

Lightweight Directory Access Protocol (LDAP)


Lightweight Directory Access Protocol (LDAP) is an open protocol that uses TCP/IP to
provide access to directories that support an X.500 model and that does not incur the
resource requirements of the more complex X.500 Directory Access Protocol (DAP). For
example, LDAP can be used to locate people, organizations, and other resources in an
Internet or intranet directory.

Local and remote fabric interconnect


The local fabric interconnect and the remote fabric interconnect are the SAN components that
are used to connect the local and remote fabrics. Depending on the distance between the two

fabrics, they can be single-mode optical fibers that are driven by long wave (LW) gigabit
interface converters (GBICs) or small form-factor pluggables (SFPs), or more sophisticated
components, such as channel extenders or special SFP modules that are used to extend the
distance between SAN components.

Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.

Logical unit (LU) and logical unit number (LUN)


A logical unit (LU) is defined by the SCSI standards as an entity that exhibits disk-like
behavior, for example, a volume or an MDisk. The logical unit number (LUN) is the number
that is used to address a logical unit.

Managed disk (MDisk)


A managed disk (MDisk) is a SCSI disk that is presented by a RAID controller and managed
by the SVC. The MDisk is not visible to host systems on the SAN.

Managed disk group (storage pool)


See “Storage pool (managed disk group)” on page 900.

Metro Global Mirror


Metro Global Mirror is a cascaded solution where Metro Mirror synchronously copies data to
the target site. This Metro Mirror target is the source volume for Global Mirror that
asynchronously copies data to a third site. This solution has the potential to provide disaster
recovery with no data loss at Global Mirror distances when the intermediate site does not
participate in the disaster that occurs at the production site.

Metro Mirror
Metro Mirror is a method of synchronous replication that maintains data consistency across
multiple volumes within the system. Metro Mirror is generally used when the write latency that
is caused by the distance between the source site and target site is acceptable to application
performance.

Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the SVC as copy 0 and the secondary copy is known within the
SVC as copy 1.

Node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for
the cluster. The SVC nodes are deployed in pairs that are called I/O Groups. One node in a
clustered system is designated as the configuration node.

Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and serial-attached SCSI (SAS) expansion ports.

Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.

Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes.

Point-in-time copy
A point-in-time copy is the instantaneous copy that the FlashCopy service makes of the
source volume. See also “FlashCopy service” on page 894.

Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.

Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.

Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage system
attachment, and remote copy operations. This SAN is referred to as a public SAN. You can
configure the public SAN to allow SVC node-to-node communication also. You can optionally
use the -localfcportmask parameter of the chsystem command to constrain the node-to-node
communication to use only the private SAN.

Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.

Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and finally
the last disk (index 2). The tie is broken by the node that locks them first.

RACE engine
The RACE engine compresses data on volumes in real time with minimal impact on
performance. See “Compression” on page 891 or “Real-time Compression” on page 898.

Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.

Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency.
The RACE engine compresses data on volumes in real time with minimal impact on
performance.

Redundant Array of Independent Disks (RAID)


RAID stands for a Redundant Array of Independent Disks, with two or more physical disk
drives that are combined in an array in a certain way, which incorporates a RAID level for
failure protection or better performance. The most common RAID levels are 0, 1, 5, 6, and 10.

RAID 0
RAID 0 is a data striping technique that is used across an array and no data protection is
provided.

RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.

RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore, two identical
copies of striped data exist; no parity exists.

RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.

RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks in the presence of two concurrent disk failures.

Redundant storage area network (SAN)


A redundant storage area network (SAN) is a SAN configuration in which there is no single
point of failure (SPoF); therefore, data traffic continues no matter what component fails.
Connectivity between the devices within the SAN is maintained (although possibly with
degraded performance) when an error occurs. A redundant SAN design is normally achieved
by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one
path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.

Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a master volume
and an auxiliary volume. These volumes also have the attributes of a primary or secondary
volume.

Reliability, availability, and serviceability (RAS)


Reliability, availability, and serviceability (RAS) are a combination of design methodologies,
system policies, and intrinsic capabilities that, when taken together, balance improved
hardware availability with the costs that are required to achieve it.

Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.

Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together.
Significant distances can exist between the components in the local cluster and those
components in the remote cluster.

Serial-attached Small Computer System Interface (SCSI) (SAS)


Serial-attached Small Computer System Interface (SCSI) (SAS) is a method that is used in
accessing computer peripheral devices that employs a serial (one bit at a time) means of
digital data transfer over thin cables. The method is specified in the American National
Standards Institute (ANSI) standard that is called SAS. In the business enterprise, SAS is useful for access
to mass storage devices, particularly external hard disk drives.

Service Location Protocol


The Service Location Protocol (SLP) is an Internet service discovery protocol that allows
computers and other devices to find services in a local area network (LAN) without prior
configuration. It was defined in Request for Comments (RFC) 2608.

Small Computer Systems Interface (SCSI)


Small Computer Systems Interface (SCSI) is an ANSI-standard electronic interface with
which personal computers can communicate with peripheral hardware, such as disk drives,
tape drives, CD-ROM drives, printers, and scanners, faster and more flexibly than with
previous interfaces.

Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a volume.

Solid-state disk (SSD)


A solid-state disk (SSD) or Flash Disk is a disk that is made from solid-state memory and
therefore has no moving parts. Most SSDs use NAND-based flash memory technology. It is
defined to the SVC as a disk tier generic_ssd.

Space efficient
See “Thin provisioning” on page 900.

Space-efficient virtual disk (VDisk)


See “Thin-provisioned volume” on page 900.

Storage area network (SAN)


A storage area network (SAN) is a dedicated storage network that is tailored to a specific
environment, which combines servers, systems, storage products, networking products,
software, and services.

Storage area network (SAN) Volume Controller (SVC)


The IBM System Storage SAN Volume Controller (SVC) is an appliance that is designed for
attachment to various host computer systems. The SVC performs block-level virtualization of
disk storage.

Storage pool (managed disk group)
A storage pool is a collection of storage capacity, which is made up of managed disks
(MDisks), that provides the pool of storage capacity for a specific set of volumes. A storage
pool can contain more than one tier of disk, which is known as a multitier storage pool and a
prerequisite of Easy Tier automatic data placement. Before SVC V6.1, this storage pool was
known as a managed disk group (MDG).

Stretched system
A stretched system is an extended high availability (HA) method that is supported by SVC to
enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system. One half of the system and I/O Group is usually
in a geographically distant location from the other, often 10 kilometers (6.2 miles) or more. A
third site is required to host a storage system that provides a quorum disk.

Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a Redundant Array of Independent Disks (RAID), is split into smaller chunks of storage
known as extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 890.

Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 890.

Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.

Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity. Before SVC V6.1, this thin-provisioned volume was known as
space efficient.

T10 DIF
T10 DIF is a “Data Integrity Field” extension to SCSI to allow for end-to-end protection of data
from host application to physical media.

Unique identifier (UID)


A unique identifier is an identifier that is assigned to storage-system logical units when they
are created. It is used to identify the logical unit regardless of the logical unit number (LUN),
the status of the logical unit, or whether alternate paths exist to the same device. Typically, a
UID is used only once.

Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created that
contains several storage systems. Storage systems from various vendors can be used. The
pool can be split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 890.

Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied to it by a
virtualization engine.

Virtual local area network (VLAN)


Virtual local area network (VLAN) tagging separates network traffic at the layer 2 level for
Ethernet transport. The system supports VLAN configuration on both IPv4 and IPv6
connections.

Virtual storage area network (VSAN)


A virtual storage area network (VSAN) is a fabric within the storage area network (SAN).

Vital product data (VPD)

Vital product data (VPD) is information that uniquely defines system, hardware,
software, and microcode elements of a processing system.

Volume
A volume is an SVC logical device that appears to host systems that are attached to the SAN
as a SCSI disk. Each volume is associated with exactly one I/O Group. A volume has a
preferred node within the I/O Group. Before SVC 6.1, this volume was known as a VDisk or
virtual disk.

Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.

Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or remote-copy relationship. In these cases, the
system fails to delete the volume, unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active. Active means
that the system detected recent I/O activity to the volume from any host.

Write-through mode
Write-through mode is a process in which data is written to a storage device at the same time
that the data is cached.


Appendix C. SAN Volume Controller stretched cluster

In this appendix, we briefly describe the IBM SAN Volume Controller (SVC) stretched cluster
(formerly known as a Split I/O Group). We also explain the term Enhanced Stretched Cluster.

We do not provide technical details or implementation guidelines in this appendix. For more
information, see IBM SAN Volume Controller Enhanced Stretched Cluster with VMware,
SG24-8211.

For more information about Enhanced Stretched Cluster prerequisites, see this website:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome

Detailed guidance about how to configure the SVC in a stretched cluster configuration, and
about its integration with a VMware environment, is provided in IBM SAN Volume Controller
Enhanced Stretched Cluster with VMware, SG24-8211.

The implementation scenarios of the SVC stretched cluster in an AIX virtualized or clustered
environment are available in IBM SAN Volume Controller Stretched Cluster with PowerVM
and PowerHA, SG24-8142.

Stretched cluster overview
Starting with SVC 5.1, IBM introduced General Availability (GA) support for SVC node
distribution across two independent locations up to 10 km (6.2 miles) apart.

With SVC 6.3, IBM offers significant enhancements for a Split I/O Group in one of the
following configurations:
򐂰 No inter-switch link (ISL) configuration:
– Passive Wavelength Division Multiplexing (WDM) devices can be used between both
sites.
– No ISLs can be used between the SVC nodes (similar to the SVC 5.1 supported
configuration).
– The distance extension is to up to 40 km (24.8 miles).
򐂰 ISL configuration:
– ISLs are allowed between the SVC nodes (not allowed with releases earlier than 6.3).
– The maximum distance is similar to Metro Mirror (MM) distances.
– The physical requirements are similar to MM requirements.
– ISL distance extension is allowed with active and passive WDM devices.

This appendix describes the characteristics of both configurations.

SVC 7.4 further extends the Enhanced Stretched Cluster options with the automatic
selection of quorum disks and the placement of one quorum disk in each of the three sites.
Users can still manually select quorum disks in each of the three sites if they want.

Non-ISL configuration
In a non-ISL configuration, each IBM SVC I/O Group consists of two independent SVC nodes.
In contrast to a standard SVC environment, nodes from the same I/O Group are not placed
close together; instead, they are distributed across two sites. If a node fails, the other node in
the same I/O Group takes over the workload, which is standard in an SVC environment.

Volume mirroring provides a consistent data copy in both sites. If one storage subsystem fails,
the remaining subsystem processes the I/O requests. The combination of SVC node
distribution in two independent data centers and a copy of data in two independent data
centers creates a new level of availability, the stretched cluster.

Even if all SVC nodes and the storage system in a single site fail, the other SVC nodes take
over the server load by using the remaining storage system. The volume ID, behavior, and
assignment to the server are still the same. No server reboot, no failover scripts, and
therefore no script maintenance are required.

However, you must consider that a stretched cluster typically requires a specific setup and
might exhibit substantially reduced performance. In a stretched cluster environment, the SVC
nodes from the same I/O Group are in two sites. A third quorum location is required for
handling “split brain” scenarios.

Figure C-1 on page 905 shows an example of a non-ISL stretched cluster configuration as it
is supported in SVC V5.1.

Figure C-1 Standard SVC environment that uses volume mirroring

The stretched cluster uses SVC volume mirroring functionality. Volume mirroring allows the
creation of one volume with two copies of MDisk extents; there are not two separate volumes
with the same data, but one volume with two physical copies. The two data copies can be in
different MDisk groups. Therefore, volume mirroring
can minimize the effect on volume availability if one set of MDisks goes offline. The
resynchronization between both copies after recovering from a failure is incremental; the SVC
starts the resynchronization process automatically.

As with a standard volume, each mirrored volume is owned by one I/O Group with a preferred
node. Therefore, the mirrored volume goes offline if the whole I/O Group goes offline. The
preferred node performs all I/O operations, which means reads and writes. The preferred node
can be set manually.

The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in
sync) and the definitions of the primary and secondary volume copy are saved there.
Therefore, an active quorum disk is required for volume mirroring. To ensure data
consistency, the SVC disables all mirrored volumes if no access exists to any quorum disk
candidate.

Consider the following preferred practices for stretched cluster:


򐂰 Drive read I/O to the local storage system.
򐂰 For distances less than 10 km (6.2 miles), drive the read I/O to the faster of the two disk
subsystems if they are not identical.
򐂰 Consider long-distance links.
򐂰 The preferred node must stay at the same site as the server that is accessing the volume.
򐂰 The volume mirroring primary copy must stay at the same site as the server that is
accessing the volume to avoid any potential latency effect where the longer distance
solution is implemented.

In many cases, no independent third site is available. It is possible to use an existing building
or computer room from the two main sites to create a third, independent failure domain.

Consider the following points:
򐂰 The third failure domain needs an independent power supply or uninterruptible power
supply. If the hosting site fails, the third failure domain must continue to operate.
򐂰 A separate storage controller for the active SVC quorum disk is required. Otherwise, the
SVC loses multiple quorum disk candidates at the same time if a single storage
subsystem fails.
򐂰 Each site (failure domain) must be placed in a different location in case of fire.
򐂰 Fibre Channel (FC) cabling must not go through another site (failure domain). Otherwise,
a fire in one failure domain might destroy the links (and break access) to the SVC quorum
disk.

As shown in Figure C-1 on page 905, the setup is similar to a standard SVC environment, but
the nodes are distributed to two sites. The GUI representation of a stretched cluster is
illustrated in Figure C-2.

Figure C-2 Stretched cluster overview in GUI

The SVC nodes and data are equally distributed across two separate sites with independent
power sources, which are named as separate failure domains (Failure Domain 1 and Failure
Domain 2). The quorum disk is in a third site with a separate power source (Failure Domain
3).

Each I/O Group requires four dedicated fiber-optic links between site 1 and site 2.

If the non-ISL configuration is implemented over a 10 km (6.2 mile) distance, passive WDM
devices (without power) can be used to pool multiple fiber-optic links with different
wavelengths in one or two connections between both sites. Small form-factor pluggables
(SFPs) with different wavelengths or “colored SFPs”, that is, SFPs that are used in Coarse
Wave Division Multiplexing (CWDM) devices are required here.

The maximum distance between both major sites is limited to 40 km (24.8 miles).

To prevent the risk of burst traffic that is caused by a lack of buffer-to-buffer credits, the link
speed must be limited. The maximum link speed depends on the cable length between the
nodes in the same I/O Group, as shown in Table C-1 on page 907.

Table C-1 SVC code level lengths and speed
SVC code level Minimum length Maximum length Maximum link speed

>= SVC 5.1    >= 0 km                  <= 10 km (6.2 miles)     8 Gbps

>= SVC 6.3    >= 10 km (6.2 miles)     <= 20 km (12.4 miles)    4 Gbps

>= SVC 6.3    >= 20 km (12.4 miles)    <= 40 km (24.8 miles)    2 Gbps

With ISLs between SVC nodes, the maximum distance is similar to Metro Mirror distances
(300 km or 186.4 miles). The physical requirements are similar to Metro Mirror requirements,
with an ISL distance extension for active and passive WDM.

The quorum disk at the third site must be FC-attached. Fibre Channel over IP (FCIP) can be
used if the round-trip delay time to the third site is always less than 80 ms, which is 40 ms in
each direction.

Table C-2 provides an overview of the SVC stretched cluster features in each code
version.

Table C-2 Stretched cluster features in SVC versions


The feature availability is listed for SVC versions 5.1 / 6.2 / 6.3 / 6.4.0 / 6.4.1 / 7.1 / 7.2, in that order:
򐂰 Non-ISL stretched cluster; separate links between SVC nodes and remote SAN switches;
up to 10 km (6.2 miles); passive CWDM and passive dense wavelength division
multiplexing (DWDM): Yes / Yes / Yes / Yes / Yes / Yes / Yes
򐂰 Dynamic quorum disk V2: N/A / Yes / Yes / Yes / Yes / Yes / Yes
򐂰 Non-ISL stretched cluster up to 40 km (24.8 miles): N/A / Per quote / Yes / Yes / Yes / Yes / Yes
򐂰 ISL stretched cluster with private and public fabric, up to 300 km (186.4 miles):
N/A / N/A / Yes / Yes / Yes / Yes / Yes
򐂰 Active DWDMs and CWDMs for non-ISL and ISL stretched cluster:
N/A / N/A / Yes / Yes / Yes / Yes / Yes
򐂰 ISL stretched cluster using Fibre Channel over Ethernet (FCoE) ports for private fabrics:
N/A / N/A / N/A / Yes / Yes / Yes / Yes
򐂰 Support of eight FC ports per SVC node: N/A / N/A / N/A / N/A / Per quote / Yes / Yes
򐂰 Enhanced mode (site awareness): N/A / N/A / N/A / N/A / N/A / N/A / Yes

Figure C-3 shows an example in which passive WDMs are used to extend the links between
site 1 and site 2.

Figure C-3 Connection with passive WDMs

For the best performance, servers in site 1 must access volumes whose preferred node and
primary copy are in site 1. SVC volume mirroring copies the data to storage 1 and storage 2. A
similar setup must be implemented for the servers in site 2 with access to the SVC node in
site 2.

The configuration that is shown in Figure C-3 covers the following failover cases:
򐂰 Power off FC switch 1: FC switch 2 takes over the load and routes I/O to SVC node 1 and
SVC node 2.
򐂰 Power off SVC node 1: SVC node 2 takes over the load and serves the volumes to the
server. SVC node 2 changes the cache mode to write-through to avoid data loss in case
SVC node 2 fails, as well.
򐂰 Power off storage 1: The SVC waits a short time (15 - 30 seconds), pauses volume copies
on storage 1, and continues I/O operations by using the remaining volume copies on
storage 2.
򐂰 Power off site 1: The server no longer has access to the local switches, which causes the
loss of access. You optionally can avoid this loss of access by using more fiber-optic links
between site 1 and site 2 for server access.

The same scenarios are valid for site 2 and similar scenarios apply in a mixed failure
environment, for example, the failure of switch 1, SVC node 2, and storage 2. No manual
failover or failback activities are required because the SVC performs the failover or failback
operation.

The use of AIX Live Partition Mobility or VMware vMotion can increase the number of use
cases significantly. Online system migrations are possible, including migrations of running
virtual machines and applications, which makes online migration a convenient way to handle
maintenance operations.

Advantages
A non-ISL configuration includes the following advantages:
򐂰 The business continuity solution is distributed across two independent data centers.
򐂰 The configuration is similar to a standard SVC clustered system.
򐂰 Limited hardware effort: Passive WDM devices can be used, but are not required.

Requirements
A non-ISL configuration includes the following requirements:
򐂰 Four independent fiber-optic links for each I/O Group between both data centers.
򐂰 Long-wave SFPs with support over long distance for direct connection to remote SAN.
򐂰 Optional usage of passive WDM devices.
򐂰 Passive WDM device: No power is required for operation.
򐂰 “Colored SFPs” to make different wavelength available.
򐂰 “Colored SFPs” must be supported by the switch vendor.
򐂰 Two independent fiber-optic links between site 1 and site 2 are recommended.
򐂰 Third site for quorum disk placement.
򐂰 Quorum disk storage system must use FC for attachment with similar requirements, such
as Metro Mirror storage (80 ms round-trip delay time, which is 40 ms in each direction).

When possible, use two independent fiber-optic links between site 1 and 2.

Bandwidth reduction
Buffer credits, which are also called buffer-to-buffer credits, are used as a flow control method
by FC technology and represent the number of frames that a port can store.

Therefore, buffer-to-buffer credits are necessary to have multiple FC frames in parallel
in flight. An appropriate number of buffer-to-buffer credits are required for optimal
performance. The number of buffer credits to achieve the maximum performance over a
specific distance depends on the speed of the link, as shown in the following examples:
򐂰 1 buffer credit = 2 km (1.2 miles) at 1 Gbps
򐂰 1 buffer credit = 1 km (.62 miles) at 2 Gbps
򐂰 1 buffer credit = 0.5 km (.3 miles) at 4 Gbps
򐂰 1 buffer credit = 0.25 km (.15 miles) at 8 Gbps
򐂰 1 buffer credit = 0.125 km (0.08 miles) at 16 Gbps

These guidelines give the minimum numbers. The performance drops if insufficient buffer
credits exist, according to the link distance and link speed, as shown in Table C-3 on
page 910.

Table C-3 FC link speed buffer-to-buffer and distance
FC link speed    Buffer-to-buffer credits for 10 km (6.2 miles)    Distance with eight credits

1 Gbps 5 16 km (9.9 miles)

2 Gbps 10 8 km (4.9 miles)

4 Gbps 20 4 km (2.4 miles)

8 Gbps 40 2 km (1.2 miles)
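
As a quick worked example that uses the guideline above: at 8 Gbps, one buffer credit covers
0.25 km (0.15 miles), so a 10 km (6.2 miles) link needs at least 10 / 0.25 = 40 buffer-to-buffer
credits, which matches the value in Table C-3.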

The number of buffer-to-buffer credits that is provided by an SVC FC host bus adapter (HBA)
is limited. An HBA of a 2145-CF8 node provides 41 buffer credits, which are sufficient for a
10 km (6.2 miles) distance at 8 Gbps. The SVC adapters in all earlier models provide only
eight buffer credits, which are enough only for a 4 km (2.4 miles) distance with a 4 Gbps link
speed. These numbers are determined by the hardware of the HBA and cannot be changed.
We suggest that you use 2145-CF8 or CG8 nodes for distances longer than 4 km (2.4 miles)
to provide enough buffer-to-buffer credits at a reasonable FC speed.

Inter-switch link configuration


Where a distance beyond 40 km (24.8 miles) between site 1 and site 2 is
required, a new configuration must be applied. The setup is similar to a standard SVC
environment, but the nodes are allowed to communicate over long distance by using ISLs
between both sites that use active or passive WDM and a different SAN configuration.
Figure C-4 shows a configuration with active/passive WDM.

Figure C-4 Connection with active/passive WDM and ISL

The stretched cluster configuration that is shown in Figure C-4 on page 910 supports
distances of up to 300 km (186.4 miles), which is the same as the recommended distance for
Metro Mirror.

Technically, the SVC tolerates a round-trip delay of up to 80 ms between nodes. Cache
mirroring traffic (rather than Metro Mirror traffic) is sent across the inter-site link and data is
mirrored to back-end storage by using volume mirroring.

Data is written by the preferred node to the local and remote storage. The Small Computer
System Interface (SCSI) write protocol results in two round trips. This latency is hidden from
the application by the write cache.

The stretched cluster is often used to move the workload between servers at separate sites.
VMotion or the equivalent can be used to move applications between servers; therefore,
applications no longer necessarily issue I/O requests to the local SVC nodes.

SCSI write commands from hosts to remote SVC nodes result in another two round trips’
worth of latency that is visible to the application. For stretched cluster configurations in a
long-distance environment, we advise that you use the local site for host I/O. Certain switches
and distance extenders use extra buffers and proprietary protocols to eliminate one round
trip's worth of latency for SCSI write commands.

These devices are supported for use with the SVC. They do not benefit or affect inter-node
communication; however, they benefit the host to remote SVC I/Os and the SVC to remote
storage controller I/Os.

Requirements
A stretched cluster with ISL configuration must meet the following requirements:
򐂰 Four independent, extended SAN fabrics are shown in Figure C-4 on page 910. Those
fabrics are named Public SAN1, Public SAN2, Private SAN1, and Private SAN2. Each
Public or Private SAN can be created with a dedicated FC switch or director, or they can
be a virtual SAN in a CISCO or Brocade FC switch or director.
򐂰 Two ports per SVC node attach to the private SANs.
򐂰 Two ports per SVC node attach to the public SANs.
򐂰 SVC volume mirroring exists between site 1 and site 2.
򐂰 Hosts and storage attach to the public SANs.
򐂰 The third site quorum disk attaches to the public SANs.

Figure C-5 on page 912 shows the possible configurations with a virtual SAN.

Figure C-5 ISL configuration with a virtual SAN

Figure C-6 shows the possible configurations with a physical SAN.

Figure C-6 ISL configuration with a physical SAN

򐂰 Use a third site to house a quorum disk. Connections to the third site can be through FCIP
because of the distance (no FCIP or FC switches were shown in the previous layouts for
simplicity). In many cases, no independent third site is available.
It is possible to use an existing building from the two main sites to create a third,
independent failure domain, but you have the following considerations:
– The third failure domain needs an independent power supply or uninterruptible power
supply. If the hosting site failed, the third failure domain needs to continue to operate.
– Each site (failure domain) must be placed in a separate fire compartment.
– FC cabling must not go through another site (failure domain). Otherwise, a fire in one
failure domain destroys the links (and breaks the access) to the SVC quorum disk.
Applying these considerations, the SVC clustered system can be protected, although two
failure domains are in the same building. Consider an IBM Advanced Technical Support
(ATS) review or processing a request for price quotation (RPQ)/Solution for Compliance in
a Regulated Environment (SCORE) to review the proposed configuration.
The storage system that provides the quorum disk at the third site must support extended
quorum disks. Storage systems that provide extended quorum support are available at this
website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003907
򐂰 Four active/passive WDMs, two for each site, are needed to extend the public and private
SAN over a distance.
򐂰 Place independent storage systems at the primary and secondary sites. Use volume
mirroring to mirror the host data between storage systems at the two sites.
򐂰 The two SVC nodes in the same I/O group must be split across the two sites (one node in each failure domain).
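
After the physical requirements are met, the following minimal sketch shows how site
awareness can be assigned with the CLI. The node, controller, and site identifiers are
hypothetical examples, and the commands assume the site-awareness CLI that was
introduced with the Enhanced Stretched Cluster; verify the exact syntax for your code level
in the Knowledge Center:

   # Assign each node of the I/O group to its site
   svctask chnode -site 1 node1
   svctask chnode -site 2 node2

   # Assign the back-end storage controllers to their sites
   svctask chcontroller -site 1 controller0
   svctask chcontroller -site 2 controller1

   # Enable the site-aware (stretched) topology
   svctask chsystem -topology stretched

   # Confirm that the active quorum disk is on the third-site storage system
   svcinfo lsquorum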

More information
For more information about the SVC stretched cluster and Enhanced Stretched Cluster,
including planning, implementation, configuration steps, and troubleshooting, see the
following resources:
򐂰 IBM SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211
򐂰 IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142
򐂰 IBM System Storage SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521
򐂰 IBM SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome

Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
򐂰 IBM SAN Volume Controller 2145-DH8 Introduction and Implementation, SG24-8229
򐂰 Implementing the IBM Storwize V7000 Gen2, SG24-8244
򐂰 Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
򐂰 Implementing the IBM Storwize V7000 V7.2, SG24-7938
򐂰 IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
򐂰 Introduction to Storage Area Networks and System Networking, SG24-5470
򐂰 IBM SAN Volume Controller and IBM FlashSystem 820: Best Practices and Performance
Capabilities, REDP-5027
򐂰 Implementing the IBM SAN Volume Controller and FlashSystem 820, SG24-8172
򐂰 Implementing IBM FlashSystem 840, SG24-8189
򐂰 IBM FlashSystem in IBM PureFlex System Environments, TIPS1042
򐂰 IBM FlashSystem 840 Product Guide, TIPS1079
򐂰 IBM FlashSystem 820 Running in an IBM Storwize V7000 Environment, TIPS1101
򐂰 Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
򐂰 IBM FlashSystem V840 Enterprise Performance Solution, TIPS1158
򐂰 IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363
򐂰 IBM System Storage b-type Multiprotocol Routing: An Introduction and Implementation,
SG24-7544
򐂰 IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
򐂰 Tivoli Storage Productivity Center for Replication for Open Systems, SG24-8149
򐂰 Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
򐂰 Implementing an IBM b-type SAN with 8 Gbps Directors and Switches, SG24-6116

You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks



Other resources
These publications are also relevant as further information sources:
򐂰 IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
򐂰 IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
򐂰 IBM System Storage SAN Volume Controller - Software Installation and Configuration
Guide, SC23-6628
򐂰 IBM System Storage SAN Volume Controller V6.2.0 - Software Installation and
Configuration Guide, GC27-2286:
http://pic.dhe.ibm.com/infocenter/svc/ic/topic/com.ibm.storage.svc.console.doc/svc_bkmap_confguidebk.pdf
򐂰 IBM System Storage SAN Volume Controller 6.2.0 Configuration Limits and Restrictions,
S1003799
򐂰 IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096
򐂰 IBM XIV and SVC Best Practices Implementation Guide:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105195
򐂰 Considerations and Comparisons between IBM SDD for Linux and DM-MPIO:
http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&q1=linux&uid=ssg1S7001664&loc=en_US&cs=utf-8&lang=en

Referenced websites
These websites are also relevant as further information sources:
򐂰 IBM Storage home page:
http://www.storage.ibm.com
򐂰 IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
򐂰 IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html

򐂰 IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
򐂰 SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
򐂰 SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
򐂰 Cygwin Linux-like environment for Windows:
http://www.cygwin.com
򐂰 Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
򐂰 Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
򐂰 Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
򐂰 Sysinternals home page:
http://www.sysinternals.com
򐂰 Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
򐂰 Download site for Windows SSH freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty

Help from IBM

IBM Support and downloads:
ibm.com/support

IBM Global Services:
ibm.com/services
