ibm.com/redbooks
6423edno.fm
International Technical Support Organization

SAN Volume Controller V5.1

October 2009
SG24-6423-07
Note: Before using this information and the product it supports, read the information in Notices on page xxi.
Eighth Edition (October 2009)

This edition applies to Version 5, Release 1, Modification 0 of the IBM System Storage SAN Volume Controller and is based on pre-GA versions of code. This document was created or updated on January 12, 2010.

Note: This book is based on a pre-GA version of a product and may not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this Redbook for more current information.
Copyright International Business Machines Corporation 2009. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
6423TOC.fm
Contents
Notices . . . . . . . . . . xxi
Trademarks . . . . . . . . . . xxii

Summary of changes . . . . . . . . . . xxiii
October 2009, Eighth Edition . . . . . . . . . . xxiii

Preface . . . . . . . . . . xxv
The team who wrote this book . . . . . . . . . . xxv
Become a published author . . . . . . . . . . xxvii
Comments welcome . . . . . . . . . . xxvii

Chapter 1. Introduction to storage virtualization . . . . . . . . . . 1
1.1 What is storage virtualization . . . . . . . . . . 2
1.2 User requirements that drive storage virtualization . . . . . . . . . . 5
1.3 Conclusion . . . . . . . . . . 6
Chapter 2. IBM System Storage SAN Volume Controller overview . . . . . . . . . . 7
2.1 History . . . . . . . . . . 8
2.2 Architectural overview . . . . . . . . . . 9
2.2.1 SVC Virtualization Concepts . . . . . . . . . . 13
2.2.2 MDisk Overview . . . . . . . . . . 16
2.2.3 VDisk Overview . . . . . . . . . . 18
2.2.4 Image Mode VDisk . . . . . . . . . . 19
2.2.5 Managed Mode VDisk . . . . . . . . . . 19
2.2.6 Cache Mode and Cache Disabled VDisks . . . . . . . . . . 20
2.2.7 Mirrored VDisk . . . . . . . . . . 21
2.2.8 Space-Efficient VDisks . . . . . . . . . . 23
2.2.9 VDisk I/O governing . . . . . . . . . . 25
2.2.10 iSCSI Overview . . . . . . . . . . 26
2.2.11 Usage of IP Addresses and Ethernet ports . . . . . . . . . . 28
2.2.12 iSCSI VDisk Discovery . . . . . . . . . . 30
2.2.13 iSCSI Authentication . . . . . . . . . . 30
2.2.14 iSCSI Multipathing . . . . . . . . . . 31
2.2.15 Advanced Copy Services overview . . . . . . . . . . 31
2.2.16 FlashCopy . . . . . . . . . . 33
2.3 SVC cluster overview . . . . . . . . . . 34
2.3.1 Quorum disks . . . . . . . . . . 36
2.3.2 I/O Groups . . . . . . . . . . 37
2.3.3 Cache . . . . . . . . . . 37
2.3.4 Cluster management . . . . . . . . . . 38
2.3.5 User Authentication . . . . . . . . . . 40
2.3.6 SVC roles and user groups . . . . . . . . . . 41
2.3.7 SVC local authentication . . . . . . . . . . 42
2.3.8 SVC remote authentication and Single Sign On . . . . . . . . . . 43
2.4 SVC hardware overview . . . . . . . . . . 46
2.4.1 Fibre Channel interfaces . . . . . . . . . . 47
2.4.2 LAN Interfaces . . . . . . . . . . 48
2.5 Solid State Drives . . . . . . . . . . 49
2.5.1 Storage bottleneck problem . . . . . . . . . . 49
2.5.2 SSD solution . . . . . . . . . . 50
2.5.3 SSD market . . . . . . . . . . 50
2.6 SSD in the SVC . . . . . . . . . . 51
2.6.1 SSD configuration rules . . . . . . . . . . 51
2.6.2 SVC 5.1 supported hardware list, device driver and firmware levels . . . . . . . . . . 54
2.6.3 What was new with SVC 4.3.1 . . . . . . . . . . 55
2.6.4 What is new with SVC 5.1 . . . . . . . . . . 55
2.7 Maximum supported configurations . . . . . . . . . . 57
2.8 Useful SVC Links . . . . . . . . . . 57
2.9 Commonly encountered terms . . . . . . . . . . 58

Chapter 3. Planning and configuration . . . . . . . . . . 63
3.1 General planning rules . . . . . . . . . . 64
3.2 Physical planning . . . . . . . . . . 65
3.2.1 Preparing your UPS environment . . . . . . . . . . 66
3.2.2 Physical rules . . . . . . . . . . 67
3.2.3 Cable connections . . . . . . . . . . 71
3.3 Logical planning . . . . . . . . . . 71
3.3.1 Management IP addressing plan . . . . . . . . . . 72
3.3.2 SAN zoning and SAN connections . . . . . . . . . . 74
3.3.3 iSCSI IP addressing plan . . . . . . . . . . 78
3.3.4 Back-end storage subsystem configuration . . . . . . . . . . 81
3.3.5 SVC cluster configuration . . . . . . . . . . 84
3.3.6 Managed Disk Group configuration . . . . . . . . . . 85
3.3.7 Virtual Disk configuration . . . . . . . . . . 87
3.3.8 Host mapping (LUN masking) . . . . . . . . . . 89
3.3.9 Advanced Copy Services . . . . . . . . . . 90
3.3.10 SAN boot support . . . . . . . . . . 95
3.3.11 Data migration from non-virtualized storage subsystem . . . . . . . . . . 96
3.3.12 SVC configuration back-up procedure . . . . . . . . . . 96
3.4 Performance considerations . . . . . . . . . . 97
3.4.1 SAN . . . . . . . . . . 97
3.4.2 Disk subsystems . . . . . . . . . . 97
3.4.3 SVC . . . . . . . . . . 98
3.4.4 Performance monitoring . . . . . . . . . . 99

Chapter 4. SVC initial configuration . . . . . . . . . . 101
4.1 Managing the cluster . . . . . . . . . . 102
4.1.1 TCP/IP requirements for SAN Volume Controller . . . . . . . . . . 102
4.2 System Storage Productivity Center overview . . . . . . . . . . 105
4.2.1 SSPC hardware . . . . . . . . . . 106
4.2.2 SVC installation planning information for SSPC . . . . . . . . . . 107
4.3 SVC Hardware Management Console . . . . . . . . . . 108
4.3.1 SVC installation planning information for the HMC . . . . . . . . . . 108
4.4 SVC Cluster Set Up . . . . . . . . . . 109
4.4.1 Creating the cluster (first time) using the service panel . . . . . . . . . . 109
4.4.2 Prerequisites . . . . . . . . . . 112
4.4.3 Initial configuration using the service panel . . . . . . . . . . 112
4.5 Adding the cluster to the SSPC or the SVC HMC . . . . . . . . . . 114
4.5.1 Configuring the GUI . . . . . . . . . . 114
4.6 Secure Shell overview and CIM Agent . . . . . . . . . . 123
4.6.1 Generating public and private SSH key pairs using PuTTY . . . . . . . . . . 125
4.6.2 Uploading the SSH public key to the SVC cluster . . . . . . . . . . 128
4.6.3 Configuring the PuTTY session for the CLI . . . . . . . . . . 129
4.6.4 Starting the PuTTY CLI session . . . . . . . . . . 133
4.6.5 Configuring SSH for AIX clients . . . . . . . . . . 135
4.7 Using IPv6 . . . . . . . . . . 135
4.7.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . 136
4.7.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . 140
4.8 Upgrading the SVC Console software . . . . . . . . . . 141

Chapter 5. Host configuration . . . . . . . . . . 153
5.1 SVC setup . . . . . . . . . . 154
5.1.1 FC/SAN setup overview . . . . . . . . . . 154
5.1.2 Port mask . . . . . . . . . . 157
5.2 iSCSI overview . . . . . . . . . . 158
5.2.1 Initiators and targets . . . . . . . . . . 158
5.2.2 Nodes . . . . . . . . . . 158
5.2.3 IQN . . . . . . . . . . 158
5.3 VDisk discovery . . . . . . . . . . 159
5.4 Authentication . . . . . . . . . . 160
5.5 AIX-specific information . . . . . . . . . . 162
5.5.1 Configuring the AIX host . . . . . . . . . . 163
5.5.2 Operating system versions and maintenance levels . . . . . . . . . . 163
5.5.3 HBAs for IBM System p hosts . . . . . . . . . . 163
5.5.4 Configuring for fast fail and dynamic tracking . . . . . . . . . . 164
5.5.5 Subsystem Device Driver (SDDPCM or SDD) . . . . . . . . . . 165
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3 . . . . . . . . . . 168
5.5.7 Using SDD . . . . . . . . . . 172
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD . . . . . . . . . . 173
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM . . . . . . . . . . 173
5.5.10 Using SDDPCM . . . . . . . . . . 177
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . . . . . . . . 178
5.5.12 Expanding an AIX volume . . . . . . . . . . 178
5.5.13 Removing an SVC volume on AIX . . . . . . . . . . 182
5.5.14 Running SVC commands from an AIX host system . . . . . . . . . . 182
5.6 Windows-specific information . . . . . . . . . . 183
5.6.1 Configuring Windows 2000, Windows 2003, and Windows 2008 hosts . . . . . . . . . . 183
5.6.2 Configuring Windows . . . . . . . . . . 183
5.6.3 Hardware lists, device driver, HBAs and firmware levels . . . . . . . . . . 183
5.6.4 Host adapter installation and configuration . . . . . . . . . . 184
5.6.5 Changing the disk timeout on Microsoft Windows Server . . . . . . . . . . 186
5.6.6 SDD driver installation on Windows . . . . . . . . . . 186
5.6.7 SDDDSM driver installation on Windows . . . . . . . . . . 188
5.7 Discovering the assigned VDisk in Windows 2000 / 2003 . . . . . . . . . . 190
5.7.1 Extending a Windows 2000 or 2003 volume . . . . . . . . . . 194
5.8 Example configuration of attaching an SVC to a Windows 2008 host . . . . . . . . . . 199
5.8.1 Installing SDDDSM on a Windows 2008 host . . . . . . . . . . 199
5.8.2 Installing SDDDSM . . . . . . . . . . 202
5.8.3 Attaching SVC VDisks to Windows 2008 . . . . . . . . . . 204
5.8.4 Extending a Windows 2008 Volume . . . . . . . . . . 210
5.8.5 Removing a disk on Windows . . . . . . . . . . 210
5.9 Using the SVC CLI from a Windows host . . . . . . . . . . 213
5.10 Microsoft Volume Shadow Copy . . . . . . . . . . 214
5.10.1 Installation overview . . . . . . . . . . 215
5.10.2 System requirements for the IBM System Storage hardware provider . . . . . . . . . . 215
5.10.3 Installing the IBM System Storage hardware provider . . . . . . . . . . 215
5.10.4 Verifying the installation . . . . . . . . . . 219
5.10.5 Creating the free and reserved pools of volumes . . . . . . . . . . 220
5.10.6 Changing the configuration parameters . . . . . . . . . . 221
5.11 Linux (on Intel) specific information . . . . . . . . . . 224
5.11.1 Configuring the Linux host . . . . . . . . . . 224
5.11.2 Configuration information . . . . . . . . . . 224
5.11.3 Disabling automatic Linux system updates . . . . . . . . . . 224
5.11.4 Setting queue depth with QLogic HBAs . . . . . . . . . . 225
5.11.5 Multipathing in Linux . . . . . . . . . . 225
5.11.6 Creating and preparing SDD volumes for use . . . . . . . . . . 230
5.11.7 Using the operating system MPIO . . . . . . . . . . 232
5.11.8 Creating and preparing MPIO volumes for use . . . . . . . . . . 232
5.12 VMware configuration information . . . . . . . . . . 237
5.12.1 Configuring VMware hosts . . . . . . . . . . 237
5.12.2 Operating system versions and maintenance levels . . . . . . . . . . 237
5.12.3 Guest operating systems . . . . . . . . . . 237
5.12.4 HBAs for hosts running VMware . . . . . . . . . . 237
5.12.5 Multipath solutions supported . . . . . . . . . . 238
5.12.6 VMware storage and zoning recommendations . . . . . . . . . . 239
5.12.7 Setting the HBA timeout for failover in VMware . . . . . . . . . . 240
5.12.8 Multipathing in ESX . . . . . . . . . . 241
5.12.9 Attaching VMware to VDisks . . . . . . . . . . 241
5.12.10 VDisk naming in VMware . . . . . . . . . . 244
5.12.11 Setting the Microsoft guest operating system timeout . . . . . . . . . . 245
5.12.12 Extending a VMFS volume . . . . . . . . . . 245
5.12.13 Removing a datastore from an ESX host . . . . . . . . . . 247
5.13 SUN Solaris support information . . . . . . . . . . 248
5.13.1 Operating system versions and maintenance levels . . . . . . . . . . 248
5.13.2 SDD dynamic pathing . . . . . . . . . . 248
5.14 HP-UX configuration information . . . . . . . . . . 249
5.14.1 Operating system versions and maintenance levels . . . . . . . . . . 249
5.14.2 Multipath solutions supported . . . . . . . . . . 249
5.14.3 Co-existence of SDD and PV Links . . . . . . . . . . 249
5.14.4 Using an SVC VDisk as a cluster lock disk . . . . . . . . . . 250
5.14.5 Support for HP-UX greater than eight LUNs . . . . . . . . . . 250
5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . . . . . . . . 250
5.16 Calculating the queue depth . . . . . . . . . . 251
5.17 Further sources of information . . . . . . . . . . 252
5.17.1 IBM Redbook publications containing SVC storage subsystem attachment guidelines . . . . . . . . . . 252

Chapter 6. Advanced Copy Services . . . . . . . . . . 253
6.1 FlashCopy . . . . . . . . . . 254
6.1.1 Business requirement . . . . . . . . . . 254
6.1.2 Moving and migrating data . . . . . . . . . . 254
6.1.3 Backup . . . . . . . . . . 254
6.1.4 Restore . . . . . . . . . . 255
6.1.5 Application testing . . . . . . . . . . 255
6.1.6 SVC FlashCopy features . . . . . . . . . . 255
6.2 Reverse FlashCopy . . . . . . . . . . 256
6.2.1 FlashCopy and TSM . . . . . . . . . . 257
6.3 How FlashCopy works . . . . . . . . . . 259
6.4 Implementation of SVC FlashCopy . . . . . . . . . . 260
6.4.1 FlashCopy mappings . . . . . . . . . . 260
6.4.2 Multiple Target FlashCopy . . . . . . . . . . 261
6.4.3 Consistency groups . . . . . . . . . . 262
6.4.4 FlashCopy indirection layer . . . . . . . . . . 263
6.4.5 Grains and the FlashCopy bitmap . . . . . . . . . . 264
6.4.6 Interaction and dependency between MTFC . . . . . . . . . . 265
6.4.7 Summary of the FlashCopy indirection layer algorithm . . . . . . . . . . 267
6.4.8 Interaction with the cache . . . . . . . . . . 267
6.4.9 FlashCopy rules . . . . . . . . . . 268
6.4.10 FlashCopy and image mode disks . . . . . . . . . . 268
6.4.11 FlashCopy mapping events . . . . . . . . . . 269
6.4.12 FlashCopy mapping states . . . . . . . . . . 271
6.4.13 Space-efficient FlashCopy . . . . . . . . . . 274
6.4.14 Background copy . . . . . . . . . . 275
6.4.15 Synthesis . . . . . . . . . . 275
6.4.16 Serialization of I/O by FlashCopy . . . . . . . . . . 276
6.4.17 Error handling . . . . . . . . . . 276
6.4.18 Asynchronous notifications . . . . . . . . . . 277
6.4.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . 278
6.4.20 Recovering data from FlashCopy . . . . . . . . . . 278
6.5 Metro Mirror . . . . . . . . . . 279
6.5.1 Metro Mirror overview . . . . . . . . . . 279
6.5.2 Remote copy techniques . . . . . . . . . . 280
6.5.3 SVC Metro Mirror features . . . . . . . . . . 281
6.5.4 Multi-Cluster-Mirroring (MCM) . . . . . . . . . . 281
6.5.5 Metro Mirror relationship . . . . . . . . . . 285
6.5.6 Importance of write ordering . . . . . . . . . . 285
6.5.7 How Metro Mirror works . . . . . . . . . . 289
6.5.8 Metro Mirror process . . . . . . . . . . 290
6.5.9 Methods of synchronization . . . . . . . . . . 290
6.5.10 State overview . . . . . . . . . . 293
6.5.11 Detailed states . . . . . . . . . . 295
6.5.12 Practical use of Metro Mirror . . . . . . . . . . 299
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . . . . . . 300
6.5.14 Metro Mirror configuration limits . . . . . . . . . . 300
6.6 Metro Mirror commands . . . . . . . . . . 301
6.6.1 Listing available SVC cluster partners . . . . . . . . . . 301
6.6.2 Creating SVC cluster partnership . . . . . . . . . . 301
6.6.3 Creating a Metro Mirror consistency group . . . . . . . . . . 302
6.6.4 Creating a Metro Mirror relationship . . . . . . . . . . 302
6.6.5 Changing a Metro Mirror relationship . . . . . . . . . . 303
6.6.6 Changing a Metro Mirror consistency group . . . . . . . . . . 303
6.6.7 Starting a Metro Mirror relationship . . . . . . . . . . 304
6.6.8 Stopping a Metro Mirror relationship . . . . . . . . . . 304
6.6.9 Starting a Metro Mirror consistency group . . . . . . . . . . 305
6.6.10 Stopping a Metro Mirror consistency group . . . . . . . . . . 305
6.6.11 Deleting a Metro Mirror relationship . . . . . . . . . . 305
6.6.12 Deleting a Metro Mirror consistency group . . . . . . . . . . 306
6.6.13 Reversing a Metro Mirror relationship . . . . . . . . . . 306
6.6.14 Reversing a Metro Mirror consistency group . . . . . . . . . . 306
6.6.15 Background copy . . . . . . . . . . 306
6.7 Global Mirror overview . . . . . . . . . . 308
6.7.1 Intracluster Global Mirror . . . . . . . . . .
6.7.2 Intercluster Global Mirror . . . . . . . . . .
6.8 Remote copy techniques . . . . . . . . . .
6.8.1 Asynchronous remote copy . . . . . . . . . .
6.8.2 SVC Global Mirror features . . . . . . . . . .
6.9 Global Mirror relationships . . . . . . . . . .
6.9.1 Global Mirror relationship between primary and secondary VDisk . . . . . . . . . .
6.9.2 Importance of write ordering . . . . . . . . . .
6.9.3 Dependent writes that span multiple VDisks . . . . . . . . . .
6.9.4 Global Mirror consistency groups . . . . . . . . . .
6.10 How Global Mirror works . . . . . . . . . .
6.10.1 Intercluster communication and zoning . . . . . . . . . .
6.10.2 SVC Cluster partnership . . . . . . . . . .
6.10.3 Maintenance of the intercluster link . . . . . . . . . .
6.10.4 Distribution of work amongst nodes . . . . . . . . . .
6.10.5 Background Copy Performance . . . . . . . . . .
6.10.6 Space-efficient background copy . . . . . . . . . .
6.11 Global Mirror process . . . . . . . . . .
6.11.1 Methods of synchronization . . . . . . . . . .
6.11.2 State overview . . . . . . . . . .
6.11.3 Detailed states . . . . . . . . . .
6.11.4 Practical use of Global Mirror . . . . . . . . . .
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . . . . . .
6.11.6 Global Mirror configuration limits . . . . . . . . . .
6.12 Global Mirror commands . . . . . . . . . .
6.12.1 Listing the available SVC cluster partners . . . . . . . . . .
6.12.2 Creating an SVC cluster partnership . . . . . . . . . .
6.12.3 Creating a Global Mirror consistency group . . . . . . . . . .
6.12.4 Creating a Global Mirror relationship . . . . . . . . . .
6.12.5 Changing a Global Mirror relationship . . . . . . . . . .
6.12.6 Changing a Global Mirror consistency group . . . . . . . . . .
6.12.7 Starting a Global Mirror relationship . . . . . . . . . .
6.12.8 Stopping a Global Mirror relationship . . . . . . . . . .
6.12.9 Starting a Global Mirror consistency group . . . . . . . . . .
6.12.10 Stopping a Global Mirror consistency group . . . . . . . . . .
6.12.11 Deleting a Global Mirror relationship . . . . . . . . . .
6.12.12 Deleting a Global Mirror consistency group . . . . . . . . . .
6.12.13 Reversing a Global Mirror relationship . . . . . . . . . .
6.12.14 Reversing a Global Mirror consistency group . . . . . . . . . .

Chapter 7. SVC operations using the CLI . . . . . . . . . .
7.1 Normal operations using CLI . . . . . . . . . .
7.1.1 Command syntax and online help . . . . . . . . . .
7.2 Working with managed disks and disk controller systems . . . . . . . . . .
7.2.1 Viewing disk controller details . . . . . . . . . .
7.2.2 Renaming a controller . . . . . . . . . .
7.2.3 Discovery status . . . . . . . . . .
7.2.4 Discovering MDisks . . . . . . . . . .
7.2.5 Viewing MDisk information . . . . . . . . . .
7.2.6 Renaming an MDisk . . . . . . . . . .
7.2.7 Including an MDisk . . . . . . . . . .
7.2.8 Adding MDisks to an MDisk group . . . . . . . . . .
7.2.9 Showing the MDisk group . . . . . . . . . .
308 308 308 308 309 311 312 312 312 314 315 315 315 316 316 317 317 317 318 320 323 326 327 327 327 328 331 332 332 333 333 333 334 334 334 335 335 336 336 337 338 338 338 338 339 340 340 341 342 343 343 344
xii
6423TOC.fm
7.2.10 Showing MDisks in an MDisk group
7.2.11 Working with managed disk groups (MDG)
7.2.12 Creating an MDisk group
7.2.13 Viewing MDisk group information
7.2.14 Renaming an MDisk group
7.2.15 Deleting an MDisk group
7.2.16 Removing MDisks from an MDisk group
7.3 Working with hosts
7.3.1 Creating a Fibre Channel attached host
7.3.2 Creating an iSCSI attached host
7.3.3 Modifying a host
7.3.4 Deleting a host
7.3.5 Adding ports to a defined host
7.3.6 Deleting ports
7.4 Working with VDisks
7.4.1 Creating a VDisk
7.4.2 VDisk information
7.4.3 Creating a space-efficient VDisk
7.4.4 Creating a VDisk in image mode
7.4.5 Adding a mirrored VDisk copy
7.4.6 Splitting a VDisk copy
7.4.7 Modifying a VDisk
7.4.8 I/O governing
7.4.9 Deleting a VDisk
7.4.10 Expanding a VDisk
7.4.11 Assigning a VDisk to a host
7.4.12 Showing VDisk-to-host mappings
7.4.13 Deleting a VDisk-to-host mapping
7.4.14 Migrating a VDisk
7.4.15 Migrating a VDisk to an image mode VDisk
7.4.16 Shrinking a VDisk
7.4.17 Showing a VDisk on an MDisk
7.4.18 Showing VDisks using an MDisk group
7.4.19 Showing from which MDisks a VDisk has its extents
7.4.20 Showing from which MDisk group a VDisk has its extents
7.4.21 Showing the host to which the VDisk is mapped
7.4.22 Showing the VDisk to which the host is mapped
7.4.23 Tracing a VDisk from a host back to its physical disk
7.5 Scripting under the CLI for SVC task automation
7.6 SVC advanced operations using the CLI
7.6.1 Command syntax
7.6.2 Organizing on-screen content
7.7 Managing the cluster using the CLI
7.7.1 Viewing cluster properties
7.7.2 Changing cluster settings
7.7.3 Cluster authentication
7.7.4 iSCSI configuration
7.7.5 Modifying IP addresses
7.7.6 Supported IP address formats
7.7.7 Setting the cluster time zone and time
7.7.8 Starting statistics collection
7.7.9 Stopping statistics collection
7.7.10 Status of copy operation
7.7.11 Shutting down a cluster
7.8 Nodes
7.8.1 Viewing node details
7.8.2 Adding a node
7.8.3 Renaming a node
7.8.4 Deleting a node
7.8.5 Shutting down a node
7.9 I/O groups
7.9.1 Viewing I/O group details
7.9.2 Renaming an I/O group
7.9.3 Adding and removing hostiogrp
7.9.4 Listing I/O groups
7.10 Managing authentication
7.10.1 Managing users using the CLI
7.10.2 Managing user roles and groups
7.10.3 Changing a user
7.10.4 Audit log command
7.11 Managing Copy Services
7.11.1 FlashCopy operations
7.11.2 Setting up FlashCopy
7.11.3 Creating a FlashCopy consistency group
7.11.4 Creating a FlashCopy mapping
7.11.5 Preparing (pre-triggering) the FlashCopy mapping
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group
7.11.7 Starting (triggering) FlashCopy mappings
7.11.8 Starting (triggering) a FlashCopy consistency group
7.11.9 Monitoring the FlashCopy progress
7.11.10 Stopping the FlashCopy mapping
7.11.11 Stopping the FlashCopy consistency group
7.11.12 Deleting the FlashCopy mapping
7.11.13 Deleting the FlashCopy consistency group
7.11.14 Migrating a VDisk to a space-efficient VDisk
7.11.15 Reverse FlashCopy
7.11.16 Split-stopping of FlashCopy maps
7.12 Metro Mirror operation
7.12.1 Setting up Metro Mirror
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2
7.12.3 Creating a Metro Mirror consistency group
7.12.4 Creating the Metro Mirror relationships
7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
7.12.6 Starting Metro Mirror
7.12.7 Starting a Metro Mirror consistency group
7.12.8 Monitoring the background copy progress
7.12.9 Stopping and restarting Metro Mirror
7.12.10 Stopping a stand-alone Metro Mirror relationship
7.12.11 Stopping a Metro Mirror consistency group
7.12.12 Restarting a Metro Mirror relationship in the Idling state
7.12.13 Restarting a Metro Mirror consistency group in the Idling state
7.12.14 Changing copy direction for Metro Mirror
7.12.15 Switching copy direction for a Metro Mirror relationship
7.12.16 Switching copy direction for a Metro Mirror consistency group
7.12.17 Creating an SVC partnership between many clusters
7.12.18 Star configuration partnership
7.13 Global Mirror operation
7.13.1 Setting up Global Mirror
7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
7.13.3 Changing link tolerance and cluster delay simulation
7.13.4 Creating a Global Mirror consistency group
7.13.5 Creating Global Mirror relationships
7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
7.13.7 Starting Global Mirror
7.13.8 Starting a stand-alone Global Mirror relationship
7.13.9 Starting a Global Mirror consistency group
7.13.10 Monitoring background copy progress
7.13.11 Stopping and restarting Global Mirror
7.13.12 Stopping a stand-alone Global Mirror relationship
7.13.13 Stopping a Global Mirror consistency group
7.13.14 Restarting a Global Mirror relationship in the Idling state
7.13.15 Restarting a Global Mirror consistency group in the Idling state
7.13.16 Changing copy direction for Global Mirror
7.13.17 Switching copy direction for a Global Mirror relationship
7.13.18 Switching copy direction for a Global Mirror consistency group
7.14 Service and maintenance
7.14.1 Upgrading software
7.14.2 Running maintenance procedures
7.14.3 Setting up SNMP notification
7.14.4 Setting up syslog event notification
7.14.5 Configuring error notification using an email server
7.14.6 Analyzing the error log
7.14.7 License settings
7.14.8 Listing dumps
7.14.9 Backing up the SVC cluster configuration
7.14.10 Restoring the SVC cluster configuration
7.14.11 Deleting a configuration backup
7.15 SAN troubleshooting and data collection
7.16 T3 recovery process

Chapter 8. SVC operations using the GUI
8.1 SVC normal operations using the GUI
8.1.1 Organizing on-screen content
8.1.2 Documentation
8.1.3 Help
8.1.4 General housekeeping
8.1.5 Viewing progress
8.2 Working with managed disks
8.2.1 Viewing disk controller details
8.2.2 Renaming a disk controller
8.2.3 Discovery status
8.2.4 Managed disks
8.2.5 MDisk information
8.2.6 Renaming an MDisk
8.2.7 Discovering MDisks
8.2.8 Including an MDisk
8.2.9 Showing a VDisk using a certain MDisk
8.3 Working with managed disk groups (MDG)
8.3.1 Viewing MDisk group information
8.3.2 Creating MDisk groups
8.3.3 Renaming an MDisk group
8.3.4 Deleting an MDisk group
8.3.5 Adding MDisks
8.3.6 Removing MDisks
8.3.7 Displaying MDisks
8.3.8 Showing MDisks in this group
8.3.9 Showing the VDisks associated with an MDisk group
8.4 Working with hosts
8.4.1 Host information
8.4.2 Creating a host
8.4.3 Fibre Channel attached hosts
8.4.4 iSCSI attached hosts
8.4.5 Modifying a host
8.4.6 Deleting a host
8.4.7 Adding ports
8.4.8 Deleting ports
8.5 Working with virtual disks
8.5.1 Using the Virtual Disks window for VDisks
8.5.2 VDisk information
8.5.3 Creating a VDisk
8.5.4 Creating a space-efficient VDisk with auto-expand
8.5.5 Deleting a VDisk
8.5.6 Deleting a VDisk-to-host mapping
8.5.7 Expanding a VDisk
8.5.8 Assigning a VDisk to a host
8.5.9 Modifying a VDisk
8.5.10 Migrating a VDisk
8.5.11 Migrating a VDisk to an image mode VDisk
8.5.12 Creating a VDisk mirror from an existing VDisk
8.5.13 Creating a mirrored VDisk
8.5.14 Creating a VDisk in image mode
8.5.15 Creating an image mode mirrored VDisk
8.5.16 Migrating to a space-efficient VDisk using VDisk mirroring
8.5.17 Deleting a VDisk copy from a VDisk mirror
8.5.18 Splitting a VDisk copy
8.5.19 Shrinking a VDisk
8.5.20 Showing the MDisks used by a VDisk
8.5.21 Showing the MDG to which a VDisk belongs
8.5.22 Showing the host to which the VDisk is mapped
8.5.23 Showing capacity information
8.5.24 Showing VDisks mapped to a particular host
8.5.25 Deleting VDisks from a host
8.6 Working with SSD drives
8.6.1 SSD introduction
8.7 SVC advanced operations using the GUI
8.7.1 Organizing on-screen content
8.8 Managing the cluster using the GUI
8.8.1 Viewing cluster properties
8.8.2 Modifying IP addresses
8.8.3 Starting the statistics collection
8.8.4 Stopping the statistics collection
8.8.5 Metro Mirror and Global Mirror
8.8.6 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.8.7 Setting the cluster time and configuring NTP server. . . . . . . . . . . . . . . . . . . . . . 8.8.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9 Manage authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.1 Modify current user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.2 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.3 Modifying a user role. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.4 Deleting a user role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.5 User Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.6 Cluster password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.9.7 Remote authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.10 Working with nodes using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.10.1 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.10.2 Renaming an I/O group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.10.3 Adding nodes to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.10.4 Configuring iSCSI ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.11 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . 8.12 FlashCopy operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13 Creating a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.1 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.3 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . 8.13.5 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.6 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.7 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.8 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.13.9 Migration between a fully allocated VDisk and Space-efficient VDisk . . . . . . . 8.13.10 Reversing and splitting a FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . 8.14 Metro Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.1 Cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.2 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . 8.14.4 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . . 8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 
8.14.7 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 8.14.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.11 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 8.14.13 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.14 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . . 8.14.16 Changing copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.14.17 Switching copy direction for a Metro Mirror consistency group. . . . . . . . . . . . 8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . 8.15 Global Mirror operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . 8.15.3 Global Mirror link tolerance and delay simulations . . . . . . . . . . . . . . . . . . . . . . 8.15.4 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . . 8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . . 8.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 8.15.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.10 Monitoring background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.12 Stopping a stand-alone Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . 8.15.13 Stopping a Global Mirror consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.14 Restarting a Global Mirror Relationship in the Idling state . . . . . . . . . . . . . . . 8.15.15 Restarting a Global Mirror consistency group in the Idling state. . . . . . . . . . . 8.15.16 Changing copy direction for Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . . 8.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.1 Package numbering and version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.2 Upgrade status utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.3 Precautions before upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.4 SVC software upgrade test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
8.17.5 Upgrade procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.6 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.7 Setting up error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.8 Set Syslog Event Notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.9 Set e-mail features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.10 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.11 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.12 Viewing the license settings log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.13 Dumping the cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.14 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.17.15 Setting up a quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18 Backing up the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.1 Backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.2 Saving the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.3 Restoring the SVC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.4 Deleting the configuration backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.5 Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.18.6 CIMOM Log Configuration . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 9. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.2 Migrating extents off an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . . . . 9.2.3 Migrating a VDisk between MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.4 Migrating the VDisk to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.5 Migrating a VDisk between I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 Migrating data from an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4.1 Image mode VDisk migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5 Data migration for Windows using the SVC GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5.1 Windows 2008 host system connected directly to the DS4700 . . . . . . . . . . . . . 9.5.2 SVC added between the host system and the DS4700 . . . . . . . . . . . . . . . . . . . 9.5.3 Put the migrated disks on a Windows 2008 host online . . . . . . . . . . . . . . . . . . . 9.5.4 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . . 9.5.5 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . . 9.5.6 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . . 9.5.7 Free the data from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5.8 Put the disks online in Windows 2008 that have been freed from SVC . . . . . . . 9.6 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . . . . . . . . . . . 9.6.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
9.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.4 Migrate the image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.7.6 Migrate the managed VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . . 9.7.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.4 Migrate image mode VDisks to VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.6 Migrate the managed VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.8.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.9 Using SVC for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.10 Using VDisk Mirroring and Space-Efficient VDisk together. . . . . . . . . . . . . . . . . . . . 9.10.1 Zero detect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
9.10.2 VDisk Mirroring With SEV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.10.3 Metro Mirror and SEV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix A. Scripting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Automated VDisk creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SVC tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Scripting alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix B. Node replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Replacing nodes nondisruptively . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Expanding an existing SVC cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Moving VDisks to a new I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Replacing nodes disruptively (rezoning the SAN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix C. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . 807 SVC performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance data collection and TPC-SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Referenced Web sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . How to get Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
6423spec.fm
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L AIX developerWorks DS4000 DS6000 DS8000 Enterprise Storage Server FlashCopy GPFS IBM Systems Director Active Energy Manager IBM Power Systems Redbooks Redpaper Redbooks (logo) Solid System i System p System Storage System Storage DS System x Tivoli TotalStorage WebSphere XIV
The following terms are trademarks of other companies: Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation. Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries. QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States. ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and other countries. VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
6423chang.fm
Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-6423-07 for SAN Volume Controller V5.1 as created or updated on December 21, 2009.
New information
Added iSCSI information
Added Solid State Drive information
Changed information
Removed duplicate information
Consolidated chapters
Removed dated material
6423pref.fm
Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to hosts and applications to physical volumes on storage devices. Each server within the SAN has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. This means that volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves management of information at the block level in a network, enabling applications and servers to share storage devices on a network.
There are many people that contributed to this book. In particular, we thank the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and ensuring that they maintained a high profile. In particular, we thank the previous authors of this redbook: Matt Amanat Angelo Bernasconi Steve Cody Sean Crawford Sameer Dhulekar Katja Gebuhr Deon George Amarnath Hiriyannappa Thorsten Hoss Juerg Hossli Philippe Jachimczyk Kamalakkannan J Jayaraman Dan Koeck Bent Lerager Craig McKenna Andy McManus Joao Marcos Leite Barry Mellish Suad Musovich Massimo Rosati Fred Scholten Robert Symons Marcus Thordal Xiao Peng Zhao We would also like to thank the following people for their contributions to previous editions and to those that contributed to this edition: John Agombar Alex Ainscow Trevor Boardman Chris Canto Peter Eccles Carlos Fuente Alex Howell Colin Jewell Paul Mason Paul Merrison Jon Parkes Steve Randle Lucy Raw Bill Scales Dave Sinclair Matt Smith Steve White Barry Whyte IBM Hursley Bill Wiegand IBM Advanced Technical Support
Dorothy Faurot IBM Raleigh Sharon Wang IBM Chicago Chris Saul IBM San Jose Sangam Racherla IBM ITSO A special mention must go to Brocade for their unparalleled support of this residency in terms of equipment and support in many areas throughout. Namely: Jim Baldyga Yong Choi Silviano Gaona Brian Steffler Steven Tong Brocade Communications Systems
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks Send your comments in an e-mail to: redbooks@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099
6423ch01.Introduction Werner.fm
Chapter 1. Introduction to storage virtualization
The key concept of virtualization is to decouple the storage (as delivered by commodity two-way RAID controllers attaching physical disk drives) from the storage functions that servers expect in today's SAN environment. Decoupling is achieved by abstracting the physical location of data from the logical representation that an application on a server sees. The basic task of the virtualization engine is to present logical entities (volumes) to the user and to manage internally the process of mapping them to the actual physical location. How this mapping is realized depends on the specific implementation, as does the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a single physical disk. A single block of information in such an environment is identified by its logical unit number (LUN), that is, the physical disk, and an offset within that LUN, known as a logical block address (LBA). Keep in mind that the term physical disk, as used in this context, describes a piece of storage that might have been carved out of a RAID array in the underlying disk subsystem. The address space is mapped between the logical entity, usually referred to as a virtual disk (VDisk), and the physical disks, each identified by its LUN. The LUNs that the storage controllers provide to the virtualization layer are referred to as MDisks throughout this book. An overview of block-level virtualization is given in Figure 1-2 on page 4.
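The mapping just described can be illustrated with a short sketch. This is a minimal, hypothetical model of extent-based address translation; the class, extent size, and mapping table are invented for illustration and do not reflect the SVC's internal implementation.

```python
# Minimal sketch of block-level virtualization: a VDisk is a table of
# fixed-size extents, each entry pointing at (MDisk ID, extent on that MDisk).
# All names and sizes here are illustrative, not SVC internals.

EXTENT_SIZE = 16 * 1024 * 1024  # 16 MiB extents (illustrative granularity)
BLOCK_SIZE = 512                # bytes per logical block

class VDisk:
    def __init__(self, extent_map):
        # extent_map[i] = (mdisk_id, extent index on that MDisk)
        self.extent_map = extent_map

    def resolve(self, lba):
        """Translate a virtual LBA into (MDisk ID, byte offset on the MDisk)."""
        byte_offset = lba * BLOCK_SIZE
        extent_index = byte_offset // EXTENT_SIZE
        offset_in_extent = byte_offset % EXTENT_SIZE
        mdisk_id, mdisk_extent = self.extent_map[extent_index]
        return mdisk_id, mdisk_extent * EXTENT_SIZE + offset_in_extent

# A small VDisk whose three extents are spread across two MDisks:
vdisk = VDisk([(0, 5), (1, 9), (0, 6)])
print(vdisk.resolve(0))                          # lands on MDisk 0
print(vdisk.resolve(EXTENT_SIZE // BLOCK_SIZE))  # second extent, on MDisk 1
```

Because the server only ever addresses the virtual LBA, the right-hand side of the map can change (for example, during a migration) without the host noticing.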
What does this mean? It means that the server and application know only about logical entities and access them through a consistent interface provided by the virtualization layer. Each logical entity offers a common, well-defined set of functions, independent of where the physical representation is located. The functions of a VDisk presented to a server, such as expanding or reducing the size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy (snapshot), thin provisioning (over-allocation), and so on, are implemented in the virtualization layer and do not rely in any way on the functions provided by the disk subsystems that deliver the MDisks. Data stored in a virtualized environment is stored in a location-independent way. This allows users to move or migrate their data, or parts of it, to a different place or storage pool, that is, to the place where the data really belongs. The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored, migrated, and so on, without any disruption to the server and application. After you have an abstraction layer in the SAN, you can do just about anything. When we think of block-level storage virtualization, we see a system that must provide what we have chosen to call the Cornerstones of Virtualization. Quite simply, these are the core advantages that a product such as the SVC can provide over traditional direct-attached SAN storage:
1. Online volume migration while applications are running. This is possibly the killer application for storage virtualization. It enables you to put your data where it belongs and, if the requirements change over time, to move it to the right place or storage pool without impacting your server or application: implementation of a tiered storage environment that provides different storage classes for information lifecycle management (ILM), balancing
I/O across controllers, and adding, upgrading, and retiring storage: in short, it allows you to put your data where it really belongs.
2. Simplification of storage management, by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage (after some initial array setup, of course).
3. Enterprise-level copy services for existing storage. The customer can license a function once and use it everywhere, and new storage can be purchased as low-cost RAID bricks. (The source and target of a copy relationship can be on different controllers.)
4. Increased storage utilization, by pooling storage across the SAN.
5. The potential to increase system performance, by reducing hot spots, striping disks across many arrays and controllers, and, in some implementations, providing additional caching.
The ability to deliver these functions in a homogeneous way, on a scalable and highly available platform, over any attached storage, and to every attached server, is the key challenge for every block-level virtualization solution.
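The first cornerstone, online volume migration, can be sketched as an extent copy followed by an atomic update of the virtual-to-physical map. This is a simplified, hypothetical illustration (the dictionaries and function are invented for this example); a real implementation must also mirror in-flight writes to both locations while the copy is in progress.

```python
# Sketch of nondisruptive extent migration: copy an extent to another MDisk,
# then atomically repoint the map. The host keeps using the same virtual
# address throughout. Illustrative only, not the SVC's actual algorithm.

extent_map = {0: ("mdisk0", 5)}        # virtual extent -> (MDisk, extent)
storage = {("mdisk0", 5): b"payload"}  # simulated physical extents

def migrate_extent(vext, dst_mdisk, dst_extent):
    src = extent_map[vext]
    # 1. Copy the data to the destination extent.
    storage[(dst_mdisk, dst_extent)] = storage[src]
    # 2. Switch the map entry; the host-visible address does not change.
    extent_map[vext] = (dst_mdisk, dst_extent)
    # 3. Release the old physical extent.
    del storage[src]

migrate_extent(0, "mdisk1", 2)
print(extent_map[0])                   # the extent now lives on mdisk1
```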
Advanced functions such as data mirroring or FlashCopy are also provided by the virtualization layer, which means that there is no need to purchase them again for each new disk subsystem. Today, it is typical that open systems run at well under 50% of the usable capacity that the RAID disk subsystems provide. Doing the math using the installed raw capacity in the disk subsystems shows, depending on the RAID level used, utilization numbers of less than 35%. A block-level virtualization solution such as the SVC can help you increase that to something like 75% or 80%. With the SVC, there is no need to keep and manage free space in every single disk subsystem, and you do not need to worry as much about whether there is sufficient free space on the right storage tier or in a single box. Even if there is enough free space in one single box, it might not be accessible in a non-virtualized environment for a specific server or application because of multipath driver issues. The SVC is able to handle the storage resources it manages as one single storage pool. Disk space allocation from this pool is a matter of minutes for every server connected to the SVC, because you simply provision the capacity as needed, without disrupting applications.
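The utilization figures above can be checked with simple arithmetic. The numbers below are the chapter's round estimates, not measured values, and the 30% RAID overhead is an assumption chosen for illustration.

```python
# Back-of-the-envelope capacity math using the round numbers from the text.
raw_tb = 100.0           # installed raw capacity
raid_overhead = 0.30     # assumed parity/mirroring overhead
usable_tb = raw_tb * (1 - raid_overhead)

# Non-virtualized: each subsystem keeps its own free-space reserve,
# so only about half of the usable capacity is actually allocated.
print(f"raw utilization, per-box: {usable_tb * 0.50 / raw_tb:.0%}")   # 35%

# Virtualized into one SVC-managed pool: allocation can safely run at
# roughly 80% of the usable capacity.
print(f"raw utilization, pooled:  {usable_tb * 0.80 / raw_tb:.0%}")
```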
1.3 Conclusion
Storage virtualization is no longer merely a concept or a bleeding-edge technology: all major storage vendors offer storage virtualization products. Using storage virtualization as the foundation for a flexible and reliable storage solution helps a company better align business and IT by optimizing the storage infrastructure and its management to match business demands. The IBM System Storage SAN Volume Controller is a mature, fifth-generation virtualization solution that uses open standards and is consistent with the SNIA (Storage Networking Industry Association) storage model. It is realized as an appliance-based, in-band block virtualization process, in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network. We expect that using the SVC will improve the utilization of your storage resources, simplify your storage management, and, last but not least, improve the availability of your applications.
Chapter 2.
2.1 History
IBM's implementation of block-level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), has its roots in an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMPASS (COMmodity PArts Storage System). One of its goals was to build a system almost exclusively from off-the-shelf standard parts. Like any enterprise-level storage control system, it had to deliver high performance and availability, comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system based on a scalable cluster of lower-performance Pentium-based servers, instead of a monolithic architecture of two nodes, is still a compelling one. COMPASS also had to address a major challenge of the heterogeneous open systems environment, namely, reducing the complexity of managing storage on block devices. The first publication covering this project was released in 2003: J. S. Glider, C. F. Fuente, and W. J. Scales, "The architecture of a SAN storage control system," IBM Systems Journal, Vol. 42, No. 2, 2003. It can be found at:
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/b97a551f7e510eff85256d660078a12e?OpenDocument
The results of the COMPASS project defined the fundamentals of the product architecture. The first release of the IBM System Storage SAN Volume Controller was announced in July 2003. The following releases brought new, more powerful hardware nodes, each approximately doubling the I/O performance and throughput of its predecessors, along with new functionality and additional interoperability with new elements in host environments, disk subsystems, and the SAN. Major steps in the product's evolution were:
SVC Release 2, February 2005.
SVC Release 3, October 2005, with new 8F2 node hardware (based on IBM X336; 8 GB cache; four 2 Gbps FC ports).
SVC Release 4.1, May 2006, with new 8F4 node hardware (based on IBM X336; 8 GB cache; four 4 Gbps FC ports).
SVC Release 4.2, May 2007, with new 8A4 entry-level node hardware (based on IBM X3250; 8 GB cache; four 4 Gbps FC ports) and new 8G4 node hardware (based on IBM X3550; 8 GB cache; four 4 Gbps FC ports).
SVC Release 4.3, May 2008.
In 2008, the 15,000th SVC engine was shipped by IBM; more than 5,000 SVC systems are in operation worldwide. With the new release of SVC introduced in this book comes a new generation of hardware nodes. This hardware, which approximately doubles the performance of its predecessors, also provides Solid State Drive (SSD) support. New software features are iSCSI support (available on all hardware nodes that support the new firmware) and multiple SVC partnerships, which support data replication among the members of a group of up to four SVC clusters.
While all of these approaches provide, in essence, the same basic 'cornerstones of virtualization', there are some interesting side effects with some or all of the approaches.
All three can provide the required functionality, but when it comes to implementation, the switch-based split I/O architecture in particular makes some of the required functionality harder to deliver. This is especially true for the FlashCopy services. Taking a point-in-time clone of a device in a split I/O architecture means that all the data has to be copied from the source to the target first. The drawback is that the target copy cannot be brought online until the entire copy has completed, that is, minutes or hours later. Consider using this approach to implement a 'sparse' flash (a FlashCopy without background copy, where the target disk is only populated with the blocks or extents that are modified after the point in time at which the FlashCopy was taken), or an incremental series of cascaded copies. Scalability is another issue, as it may be difficult to scale out to n-way clusters of intelligent line cards. A multi-way switch design is also very difficult to code and implement because of the problem of keeping the metadata synchronized across all processing blades with fast updates: this has to be done at wire speed or you lose that claim. For the same reason, space-efficient copies and replication are also difficult to implement. Both synchronous and asynchronous replication require some level of buffering of I/O requests; while switches do have buffering built in, the number of additional buffers required would be huge and grows as the link distance increases. Most of today's intelligent line cards do not provide anywhere near this level of local storage. The most common solution is to use an external box to provide the replication services, which means another box to manage and maintain, a step in the opposite direction from the concepts of virtualization.
Also keep in mind when choosing a split I/O architecture that your virtualization implementation will be locked to the actual switch type and hardware you are using, which makes any future change very hard to implement. The controller-based approach does well with respect to functionality, but fails when it comes to scalability and upgradability. This is caused by the nature of its design: there is no true decoupling with this approach. It becomes an issue when you have to life-cycle such a solution, that is, replace such a controller. You will be challenged with data migration issues and questions such as how to reconnect the servers to the new controller, and how to do it online without any impact on your applications. Be aware that in such a scenario you will not only be replacing a controller but also, implicitly, replacing your entire virtualization solution. You will not only have to replace your hardware but also update or re-purchase the licenses for the virtualization feature, advanced copy functions, and so on. With a network appliance solution based on a scale-out cluster architecture, life-cycle management tasks such as adding or replacing disk subsystems or migrating data between them are greatly simplified: servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update, that is, they cause no additional costs when disk subsystems have to be replaced. Only the network-based appliance solution provides an independent and scalable virtualization platform that can deliver enterprise-class copy services, is open for future interfaces and protocols, lets you choose the disk subsystems that best fit your requirements, and, last but not least, does not lock you in to specific SAN hardware.
These are some of the reasons why IBM chose the network-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller.
The key characteristics of the SVC are:
1. Highly scalable: an easy growth path to 2n nodes (growth occurs in pairs of nodes)
2. SAN interface independent: currently supports Fibre Channel and iSCSI, and is open for future enhancements such as InfiniBand
3. Host independent: for fixed-block-based Open Systems environments
4. Storage (RAID controller) independent: there is an ongoing plan to qualify additional types of RAID controllers
5. Able to utilize commodity RAID controllers, so-called Low Complexity RAID Bricks
6. Able to utilize node-internal disk, that is, Solid State Disks
On the SAN storage provided by the disk subsystems, the SVC can offer the following services:
1. Create and manage a single pool of storage attached to the SAN
2. Block-level virtualization, that is, Logical Unit virtualization
3. Provide advanced functions to the entire SAN, such as: a large, scalable cache; Advanced Copy Services, including FlashCopy (point-in-time copy) and Metro Mirror and Global Mirror (synchronous/asynchronous remote copy); and data migration
This feature list will grow in future releases. The additional layer could provide future features such as policy-based space management, mapping your storage resources according to desired performance characteristics, or dynamic re-allocation of entire VDisks, or parts of them, according to user-definable performance policies. As mentioned before, as soon as the decoupling is properly done, that is, an additional layer is installed between the servers and the storage, everything is possible. SAN-based storage infrastructures using SVC may be configured with two or more SVC nodes arranged in a cluster. These are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to see the RAID controllers, and the hosts to see the SVC. The hosts are not usually able to directly see or operate on the RAID controllers unless a split controller configuration is in use. The zoning capabilities of the SAN switch are used to create these distinct zones. The assumptions made about the SAN fabric are kept limited, to make it possible to support a number of different SAN fabrics with minimum development effort. Anticipated SAN fabrics include Fibre Channel and iSCSI over Gigabit Ethernet; others might follow in the future. Figure 2-2 shows a conceptual diagram of a storage system utilizing the SVC. It shows a number of hosts connected to a SAN fabric or LAN. In practical implementations with high availability requirements (the majority of the target customers for SVC), the SAN fabric cloud represents a redundant SAN comprising a fault-tolerant arrangement of two or more counterpart SANs, providing alternate paths for each SAN-attached device. For iSCSI/LAN-based access networks to the SVC, both scenarios, that is, using a single network or using two physically separated networks, are supported. Redundant paths to VDisks can be provided in both scenarios.
A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts. These virtual disks are created from MDisks presented by the RAID controllers. There are two distinct zones in the fabric: a host zone, in which the hosts can see and address the SVC nodes, and a storage zone, in which the SVC nodes can see and address the MDisks/LUNs presented by the RAID controllers. Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This is commonly described as symmetric virtualization. Figure 2-3 on page 13 shows the SVC logical topology.
For the sake of simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. As already mentioned, in a real environment the recommendation is to use two redundant SAN fabrics. The SVC can be connected to up to four fabrics. Zoning has to be done per host, per disk subsystem, and per fabric. Zoning details can be found in 3.3.2, SAN zoning and SAN connections on page 74. For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using dedicated VLANs for storage traffic will prevent any IP interface, switch, or target port failure from compromising the host servers' access to the VDisk LUNs.
When a host server performs I/O to one of its VDisks, all the I/Os for that VDisk are directed to one specific I/O Group in the cluster. During normal operation, the I/Os for a specific VDisk are always processed by the same node of the I/O Group. This node is referred to as the preferred node for that VDisk. Both nodes of an I/O Group act as the preferred node for their own specific subset of the total number of VDisks that the I/O Group presents to the host servers, and each node also acts as the failover node for its partner node in the I/O Group, taking over the I/O handling from the partner node if required. Because in an SVC environment the I/O handling for a VDisk can switch between the two nodes of an I/O Group, servers connected via Fibre Channel must use multipath drivers in order to handle these failover situations. SVC 5.1 introduces iSCSI as an alternative means of attaching hosts; however, all communication with back-end storage subsystems, and with other SVC clusters, is still via Fibre Channel. For iSCSI, node failover can be handled without a multipath driver installed on the server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. To protect the server against link failures in the network or HBA failures, a multipath driver is still mandatory. The SVC I/O Groups are connected to the SAN in such a way that all application servers accessing VDisks from an I/O Group have access to that group. Up to 256 host server objects can be defined per I/O Group; that is, up to 256 hosts can consume VDisks provided by that specific I/O Group. If required, host servers can be mapped to more than one I/O Group of an SVC cluster, accessing VDisks from different I/O Groups. VDisks can be moved between I/O Groups to redistribute load between them.
With the current release of SVC, I/Os to a VDisk being moved have to be quiesced for the duration of the move. The SVC cluster and its I/O Groups view the storage presented to the SAN by the back-end controllers as a number of disks, known as Managed Disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The application servers, on the other hand, do not see the MDisks at all. Instead, they see a number of logical disks, known as Virtual Disks or VDisks, which are presented by the SVC I/O Groups via the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage provisioned out of one, or, if it is a mirrored VDisk, two Managed Disk Groups (MDGs). An MDG is a collection of up to 128 MDisks that forms the storage pool out of which VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these pools can be changed (expanded or shrunk) at runtime, without taking the MDG or the VDisks it provides offline. At any point in time, an MDisk can be a member of only one MDG, with one exception (the image mode VDisk), which is explained later in this chapter. MDisks used in a specific MDG should have the following characteristics: They should have the same hardware characteristics, for example, RAID type, RAID array size, disk type, and disk RPM. Be aware that it is always the weakest element (MDisk) in a chain that defines the maximum strength of that chain (MDG). The disk subsystems providing the MDisks should have similar characteristics, for example, maximum IOPS, response time, cache, and throughput. Use MDisks of the same size and MDisks that provide the same number of extents (both recommended); keep this in mind when you are adding MDisks to an existing MDG. If that is not feasible, check the distribution of the VDisks' extents in that MDG.
For further details, refer to the Redbook SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which can be found at: http://www.redbooks.ibm.com/abstracts/sg247521.html?Open VDisks can be mapped to a host in order to allow a specific server access to a set of VDisks. A host within the SVC is a collection of HBA WWPNs or iSCSI names (IQNs) defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs generated by the SVC itself. A VDisk can be mapped to multiple hosts, for example, a VDisk accessed by multiple hosts of a server cluster. How these different entities relate to each other is shown in Figure 2-4.
An MDisk can be provided by a SAN disk subsystem or by the solid state disks (SSDs) in the SVC nodes themselves. Each MDisk is divided into a number of extents. The extent size is selected by the user when an MDG is created and ranges from 16MB (the default) up to 2GB. The recommendation is to use the same extent size for all MDGs in a cluster, because this is a prerequisite for VDisk migration between two MDGs. If the extent sizes do not match, you have to use VDisk mirroring (see 2.2.7, Mirrored VDisk on page 21) as a workaround. For simply copying (not migrating) the data to a new VDisk in another MDG, SVC Advanced Copy Services can be used. The two most common ways in which VDisks can be provisioned out of an MDG are shown in Figure 2-5. Striped mode is the recommended one for most cases. Sequential extent allocation mode may slightly increase sequential performance for some workloads.
Extents for a VDisk can be allocated in many different ways. The process is under full user control at VDisk creation time and can be changed at any time by migrating single extents of a VDisk to another MDisk within the MDG. Details of how to create VDisks and migrate extents via the GUI or CLI can be found in Chapter 7, SVC operations using the CLI on page 337, Chapter 8, SVC operations using the GUI on page 469, and Chapter 9, Data migration on page 675. SVC limits the number of extents in a cluster. The limit is currently 2^22 (approximately 4 million) extents; this number may change in future releases. Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size chosen by the user. Assuming all defined MDGs have been created with the same extent size, we get the capacity numbers specified in Table 2-1 for an SVC cluster.
Table 2-1 Extent size to addressability matrix

Extent Size    Maximum Cluster Capacity
16MB           64TB
32MB           128TB
64MB           256TB
128MB          512TB
256MB          1PB
512MB          2PB
1024MB         4PB
2048MB         8PB
Most of today's clusters will be fine with 1 to 2 PB of capacity. We therefore recommend using 256MB or, for larger clusters, 512MB as the standard extent size.
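The figures in Table 2-1 follow directly from the 2^22 extent limit: maximum cluster capacity is simply the extent size multiplied by the number of addressable extents. A quick illustrative calculation (our own sketch, not an SVC interface):

```python
# Maximum SVC cluster capacity = extent size x number of addressable extents.
MAX_EXTENTS = 2 ** 22  # approximately 4 million extents per cluster

def max_cluster_capacity_tb(extent_size_mb: int) -> int:
    """Return the maximum addressable cluster capacity in TB for a given
    extent size in MB (all MDGs assumed to use the same extent size)."""
    return extent_size_mb * MAX_EXTENTS // (1024 * 1024)

for size_mb in (16, 256, 512, 2048):
    print(f"{size_mb}MB extents -> {max_cluster_capacity_tb(size_mb)}TB")
# 16MB -> 64TB, 256MB -> 1024TB (1PB), 512MB -> 2048TB (2PB), 2048MB -> 8192TB (8PB)
```

This reproduces every row of Table 2-1 and makes clear why the extent size must be chosen with the expected final cluster capacity in mind.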
1. Unmanaged MDisk An MDisk is reported as unmanaged when it is not a member of any Managed Disk Group. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. SVC will not write to an MDisk that is in unmanaged mode except when it attempts to change the mode of the MDisk to one of the other modes. SVC can see the resource, but it is not assigned to a pool, that is, an MDG. 2. Managed MDisk Managed mode MDisks are always members of an MDG and contribute extents to the pool of extents available in the Managed Disk Group. Zero or more VDisks (if not operated in image mode, see below) may use these extents. MDisks operating in managed mode may have metadata extents allocated from them and may be used as quorum disks. 3. Image Mode MDisk Image mode provides a direct block-for-block translation from the MDisk to the VDisk, without virtualization of the blocks. This mode is provided to satisfy three main usage scenarios: a. To allow virtualization of MDisks that already contain data that was written directly, not through an SVC. It allows a customer to insert the SVC into the data path of an existing storage configuration with minimal downtime. Details of the data migration process are given in Chapter 9, Data migration on page 675. b. To allow a virtual disk managed by the SVC to be used with the copy services provided by the underlying RAID controller. To avoid loss of data integrity when the SVC is used in this way, it is important that the SVC cache be disabled for the VDisk. c. To provide the ability to migrate to image mode, which allows the SVC to export VDisks so that the server can access them directly without the SVC. An image mode MDisk is associated with exactly one VDisk. If the (image mode) MDisk's capacity is not a multiple of the MDisk Group's extent size, the last extent is partial (see Figure 2-6 on page 18). An image mode VDisk is a pass-through, one-to-one map of its MDisk.
It cannot be a quorum disk and will not have any SVC metadata extents allocated on it. Managed or image mode MDisks are always members of an MDG.
If you work with image mode MDisks, it is a best practice to put them in a dedicated MDG and use a special naming convention for it (for example, MDG_IMG_XXX). Bear in mind that the extent size chosen for this specific MDG has to match that of the MDG you plan to migrate the data into. All SVC copy services can be applied to image mode disks.
The original figure here showed the mode-transition diagram: the states "Doesn't exist", managed mode, and image mode, connected by the transitions "delete vdisk", "migrate to image mode", and "complete migrate".
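The transitions named in the figure can be summarized as a small state table. The sketch below is our own reading of the figure labels; the "create vdisk" trigger name is our assumption for the transition out of the initial state, not an SVC command:

```python
# Mode transitions as labeled in the figure (illustrative model only).
# ("create vdisk" is an assumed trigger name; the others are figure labels.)
TRANSITIONS = {
    ("doesn't exist", "create vdisk"): "managed mode",
    ("managed mode", "migrate to image mode"): "image mode",
    ("image mode", "complete migrate"): "managed mode",
    ("managed mode", "delete vdisk"): "doesn't exist",
    ("image mode", "delete vdisk"): "doesn't exist",
}

def next_mode(current: str, action: str) -> str:
    """Return the resulting mode, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(current, action)]
    except KeyError:
        raise ValueError(f"{action!r} is not a valid transition from {current!r}")

print(next_mode("managed mode", "migrate to image mode"))  # image mode
```

Modeling the modes this way makes it easy to see that "delete vdisk" is the only way back to the initial state from either mode.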
Managed mode VDisks have two allocation policies: sequential and striped. The policies define how the extents of a VDisk are carved out of an MDG.
maps to them. These unused extents are available for use in creating new VDisks, migration, expansion and so on.
A managed mode VDisk can have a size of zero blocks, in which case it occupies zero extents. Such a VDisk cannot be mapped to a host or take part in any Advanced Copy Services functions. The allocation of a specific number of extents from a specific set of Managed Disks is performed by the following algorithm: where the set of MDisks to allocate extents from contains more than one disk, extents are allocated from the MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round robin moves to the next MDisk in the set that has a free extent. Beginning with SVC 5.1, when creating a new VDisk, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than simply choosing the next disk in a round-robin fashion. The pseudo-random algorithm avoids the situation in which the striping effect inherent in a round-robin algorithm places the first extent of a large number of VDisks on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk could lead to poor performance for workloads that place a large I/O load on the first extent of each VDisk or that create multiple sequential streams.
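The allocation behaviour described above (round robin across the MDisk set, skipping full MDisks, with a pseudo-randomly chosen starting MDisk as of SVC 5.1) can be sketched as a simplified model. This is our own illustration of the described algorithm, not SVC source code:

```python
import random

def allocate_extents(mdisk_free, count):
    """Allocate `count` extents round-robin across MDisks, starting at a
    pseudo-randomly chosen MDisk. `mdisk_free` maps an MDisk name to its
    number of free extents and is updated in place."""
    names = list(mdisk_free)
    idx = random.randrange(len(names))  # pseudo-random first MDisk (SVC 5.1)
    allocation = []
    while len(allocation) < count:
        if all(free == 0 for free in mdisk_free.values()):
            raise RuntimeError("MDisk Group is out of free extents")
        name = names[idx % len(names)]
        if mdisk_free[name] > 0:        # an MDisk with no free extents is skipped
            mdisk_free[name] -= 1
            allocation.append(name)
        idx += 1                        # round robin to the next MDisk in the set
    return allocation

print(allocate_extents({"md0": 2, "md1": 2, "md2": 2}, 5))
```

However the random start falls, extents end up evenly striped across the MDisks that still have free space, which is exactly the effect the pseudo-random first choice is meant to preserve across many VDisk creations.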
Wherever possible, we recommend using SVC copy services in preference to the underlying controller copy services.
A copy may be added to a VDisk that has only one copy, or removed from a VDisk that has two. Checks prevent accidental removal of the sole copy of a VDisk. A newly created, unformatted VDisk with two copies initially has the copies out of synchronization. The primary copy is defined as fresh and the secondary copy as stale. The synchronization process updates the secondary copy until it is synchronized. This will be
done at the default synchronization rate or at a rate defined when creating or subsequently modifying the VDisk. If a two-copy mirrored VDisk is created with the format parameter, both copies are formatted in parallel, and the VDisk comes online when both operations are complete and the copies are in sync. If a mirrored VDisk is expanded or shrunk, all of its copies are expanded or shrunk accordingly. If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a no-synchronization option can be selected that declares the copies as synchronized (even when they are not). The time for a copy that has become unsynchronized to re-synchronize is minimized by copying only those 256KB grains that have been written to since synchronization was lost. This is known as incremental synchronization: only the changed grains need to be copied to restore synchronization. Important: An unmirrored VDisk may be migrated from a source to a destination by adding a copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy. This operation may be stopped at any time. The two copies can be in different MDisk Groups with different extent sizes. Where there are two copies of a VDisk, one is known as the primary copy. If the primary is available and synchronized, reads from the VDisk are directed to it. The user may select the primary when creating the VDisk or may change it later. Selecting the copy allocated on the higher-performance controller maximizes the read performance of the VDisk. The write performance is constrained by the lower-performance controller, because writes must complete to both copies before the VDisk is considered to have been successfully written. Keep this in mind when a mirrored VDisk is created with one copy in an SSD MDG and the second copy in an MDG populated with resources from a disk subsystem.
Note: SVC does not prevent you from creating the two copies in one or more SSD MDGs of the same node. Doing so, however, means that you lose redundancy and could therefore lose access to your VDisk if the node fails or restarts. A VDisk with copies may be checked to see whether all copies are identical. If a medium error is encountered while reading from any copy, it is repaired using data from another fresh copy. This process may be asynchronous, but it will give up if the copy with the error goes offline. Mirrored VDisks consume bitmap space at a rate of 1 bit per 256KB grain, which translates to 1MB of bitmap space supporting 2TB worth of mirrored VDisks. The default allocation of bitmap space is 20MB, which supports 40TB of mirrored VDisks. If all 512MB of variable bitmap space is allocated to mirrored VDisks, 1PB of mirrored VDisks can be supported. The advent of the mirrored VDisk feature will inevitably lead customers to think about two-site solutions for cluster and VDisk availability. Generally the advice is not to split a cluster, that is, its individual I/O Groups, across sites, but there are some configurations that will be effective. Special care has to be taken to prevent a situation referred to as a split-brain scenario (caused, for example, by a power glitch on the SAN switches; the SVC nodes are protected by their own UPS), in which the connectivity between components is lost and a contest for the SVC cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes which won the
quorum disk then experiences a permanent power loss, the cluster is lost. The way to prevent this is to use a configuration that will provide effective redundancy because of the exact placement of system components in fault domains. The details of such a configuration and the required prerequisites can be found in Chapter 3, Planning and configuration on page 63.
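The bitmap arithmetic stated above (1 bit of bitmap space per 256KB grain of mirrored capacity) is easy to verify with a short illustrative calculation:

```python
GRAIN_BYTES = 256 * 1024  # one bitmap bit covers one 256KB grain

def mirrored_capacity_tb(bitmap_mb: int) -> int:
    """Return the TB of mirrored VDisk capacity supported by a bitmap
    allocation of `bitmap_mb` megabytes (1 bit per 256KB grain)."""
    bits = bitmap_mb * 1024 * 1024 * 8   # bitmap size in bits
    return bits * GRAIN_BYTES // 1024 ** 4

print(mirrored_capacity_tb(1))    # 2   (1 MB of bitmap supports 2 TB)
print(mirrored_capacity_tb(20))   # 40  (the 20 MB default supports 40 TB)
print(mirrored_capacity_tb(512))  # 1024 (all variable bitmap space: 1 PB)
```

The three results match the 2TB, 40TB, and 1PB figures quoted in the text.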
SE VDisks store both user data and metadata. Each grain requires metadata, but the overhead will never be more than 0.1% of the user data and is independent of the virtual capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain size as the map grain size for best performance. If you are using the Space-Efficient VDisk directly with a host system, use a small grain size. Note: SE VDisks need no formatting. A read I/O that requests data from unallocated data space returns zeroes. When a write I/O causes space to be allocated, the grain is zeroed prior to use. Consequently, a Space-Efficient VDisk always behaves as formatted, regardless of whether the format flag is specified when the VDisk is created. The format flag is ignored when a Space-Efficient VDisk is created or its real capacity is expanded; the virtualization component never formats the real capacity of a Space-Efficient VDisk. The real capacity of an SE VDisk can be changed provided that the VDisk is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the VDisk. SE VDisks use the real capacity of a VDisk in ascending order as new data is written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to reduce the real capacity of an SE VDisk below the capacity that is currently in use, other than by deleting the VDisk. An SE VDisk can be configured to autoexpand. This causes SVC to automatically expand the real capacity of the SE VDisk as its real capacity is used. Autoexpand attempts to maintain a fixed amount of unused real capacity on the VDisk, known as the contingency capacity.
The contingency capacity is initially set to the real capacity assigned when the VDisk is created. If the user modifies the real capacity, the contingency capacity is reset to the difference between the used capacity and the real capacity. A VDisk created with zero contingency capacity goes offline as soon as it needs to expand, whereas a VDisk with non-zero contingency capacity stays online until that capacity has been used up. Autoexpand does not cause space to be assigned to the VDisk that can never be used; in practice this means that autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The real capacity may be manually expanded to more than the maximum required by the current virtual capacity, and the contingency capacity is then recalculated as described previously. To support the auto-expanding of Space-Efficient VDisks, the MDisk Groups from which they are allocated have a configurable warning capacity. When the used capacity of the group exceeds the warning capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents of image mode VDisks, the calculation uses the free capacity. For example, if a warning level of 80% has been specified, the warning is logged when 20% of the free capacity remains. Note: Space-Efficient VDisks require additional I/O operations to read and write metadata to back-end storage and generate additional load on the SVC nodes. We therefore do not recommend the use of SE VDisks for high-performance applications. An SE VDisk can be converted to a fully allocated VDisk using VDisk Mirroring. SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk by the following procedure: 1. Start with a VDisk having one fully allocated copy. 2. Add a Space-Efficient copy to the VDisk. 3. Allow VDisk Mirroring to synchronize the copies. 4. Remove the fully allocated copy. The space savings are achieved by the use of a zero-detection algorithm.
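The contingency-capacity rules above (initially equal to the created real capacity, recalculated as real minus used after a manual resize, and maintained by autoexpand) can be modelled with a short sketch. This is a deliberately simplified illustration under our own assumptions, not SVC code, and it ignores units and extent granularity:

```python
class SpaceEfficientVDisk:
    """Simplified model of SE VDisk real and contingency capacity."""

    def __init__(self, real_capacity: int):
        self.real = real_capacity
        self.used = 0
        # The contingency capacity starts as the real capacity at creation.
        self.contingency = real_capacity

    def resize_real(self, new_real: int) -> None:
        # After a manual resize, the contingency capacity is reset to the
        # difference between the used capacity and the real capacity.
        self.real = new_real
        self.contingency = self.real - self.used

    def autoexpand(self) -> None:
        # Autoexpand tries to keep `contingency` unused real capacity free.
        shortfall = self.used + self.contingency - self.real
        if shortfall > 0:
            self.real += shortfall

vd = SpaceEfficientVDisk(real_capacity=100)
vd.used = 90
vd.resize_real(120)
print(vd.contingency)  # 30
```

A VDisk created with `real_capacity=0` would have zero contingency in this model, mirroring the text's warning that such a VDisk goes offline as soon as it needs to expand.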
Note that as of 5.1.0, this zero-detection algorithm is used only for I/O generated by synchronization of mirrored VDisks; I/O from other components (for example, FlashCopy) is written as normal. Note: Consider Space-Efficient VDisks as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because during the initial synchronization the target becomes fully allocated.
but will not pay for I/Os beyond a certain rate). Only commands that access the medium (Read (6/10), Write (6/10), and Write and Verify) are subject to I/O governing. Note: I/O governing is applied to remote copy secondaries as well as primaries. This means that if an I/O governing rate has been set on a VDisk that is a remote copy secondary, this governing rate is also applied to the primary. If governing is in use on both the primary and the secondary VDisks, each governed quantity is limited to the lower of the two values specified. Governing has no effect on FlashCopy or data migration I/O. An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The budget is evenly divided between all SVC nodes that service the VDisk, that is, between the nodes which form the I/O Group of which that VDisk is a member. The algorithm operates two levels of policing. While a VDisk on each SVC node has been receiving I/O at a rate below the governed level, no governing is performed. A check is made every minute that the VDisk on each node is continuing to receive I/O below the threshold level. Where this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os. While policing is in force, a budget allowance is calculated for a one-second period and I/Os are counted over that second. If I/Os are received in excess of the one-second budget on any node in the I/O Group, those and later I/Os are pended. When the second expires, a new budget is established, and any pended I/Os are re-driven under the new budget. This algorithm may cause I/O to back up in the front end, which might eventually cause a Queue Full condition to be reported to hosts that continue to flood the system with I/O. If a host stays within its one-second budget on all nodes in the I/O Group for a period of one minute, the policing is relaxed, and monitoring takes place over the one-minute period as before.
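The budget arithmetic described above (a per-minute budget split evenly across the nodes of the I/O Group, then policed in one-second windows once the threshold has been exceeded) can be sketched as follows. The two-node I/O Group and the function names are our own assumptions; this models only the budget split and the pend decision, not the full two-level policing state machine:

```python
def per_node_second_budget(iops_per_minute: float, nodes_in_io_group: int = 2) -> float:
    """Per-node, per-second I/O budget for a governed VDisk: the minute
    budget is divided evenly across the I/O Group's nodes, then across
    the 60 one-second policing windows."""
    return iops_per_minute / nodes_in_io_group / 60.0

def should_pend(ios_this_second: int, iops_per_minute: float, policing: bool) -> bool:
    """While policing is in force, I/Os beyond the one-second budget are
    pended until the next second's budget is established."""
    if not policing:
        return False  # below the governed rate, no governing is performed
    return ios_this_second > per_node_second_budget(iops_per_minute)

print(per_node_second_budget(6000))          # 50.0 I/Os per node per second
print(should_pend(60, 6000, policing=True))  # True: 60 exceeds the budget of 50
```

For a VDisk governed at 6,000 I/Os per minute in a two-node I/O Group, each node polices to 50 I/Os per second once the minute-level check has triggered.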
into the Command Descriptor Block (CDB). The server executes the command, and completion is indicated by a special signal alert. Encapsulation and reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network, is the main function of iSCSI. The concepts of names and addresses have been carefully separated in iSCSI: An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms "initiator name" and "target name" also refer to an iSCSI name. An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned via DHCP. An SVC node represents an iSCSI node and provides statically allocated IP addresses. Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC 3720 and contains (in order): The string "iqn." A date code specifying the year and month in which the organization registered the domain or sub-domain name used as the naming authority string. The organizational naming authority string, which consists of a valid, reversed domain or subdomain name. Optionally, a ':', followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.
For SVC, the IQN for its iSCSI target is specified as: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>. On a Windows server, the IQN, that is, the name for the iSCSI initiator, may be defined as: iqn.1991-05.com.microsoft:<computer name>. IQNs can be abbreviated by a descriptive name, known as an alias. An alias can be assigned to an initiator or target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases. An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.

Note: Before changing the cluster or node names for an SVC cluster that has servers connected to it via iSCSI, be aware that because the cluster and node names are part of the SVC's IQN, you could lose access to your data by changing these names. The SVC GUI displays a specific warning; the CLI does not.

The iSCSI session consists of a Login Phase and a Full Feature Phase, which is completed with a special command.
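The IQN formats quoted above can be assembled and sanity-checked programmatically. The following sketch uses the SVC target IQN format from the text; lowercasing the cluster and node names is our assumption here (IQNs are conventionally lowercase), and the structural check is a loose reading of the RFC 3720 format, not a full validator.

```python
import re

def svc_target_iqn(cluster_name, node_name):
    # Builds the SVC iSCSI target IQN in the format given above:
    # iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
    # (lowercasing is an assumption; IQNs are conventionally lowercase)
    return f"iqn.1986-03.com.ibm:2145.{cluster_name.lower()}.{node_name.lower()}"

def looks_like_iqn(name):
    # Loose structural check per RFC 3720: the string "iqn.", a yyyy-mm
    # date code, a reversed naming-authority domain, and an optional
    # ':' plus a unique suffix. Maximum length is 255 bytes.
    if len(name.encode()) > 255:
        return False
    return re.match(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.*)?$", name) is not None

iqn = svc_target_iqn("ITSO-CLS1", "Node1")
print(iqn)  # iqn.1986-03.com.ibm:2145.itso-cls1.node1
```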
The Login Phase of iSCSI is identical to the Fibre Channel port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI Login Phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks. As soon as the login is confirmed, the iSCSI session enters the Full Feature Phase. If more than one TCP connection was established, iSCSI requires that each command/response pair goes through one TCP connection. Thus, each separate read or write command is carried out over a single connection, without the need to trace each request across different flows. However, different transactions can be delivered through different TCP connections within one session. An overview of the different block-level storage protocols and where the iSCSI layer is positioned is shown in Figure 2-11 on page 28.
An introduction to the standards and definitions used for iSCSI, including the relevant IETF RFCs, can be found at: http://en.wikipedia.org/wiki/ISCSI/
A port IP address is used to perform iSCSI I/O to the cluster. Each node can have a port IP address for each of its ports. In the case of an upgrade to the SVC 5.1 code, the original cluster IP address is retained and is always found on the eth0 interface of the configuration node. A second, new cluster IP address can optionally be configured in SVC 5.1; it is always on the eth1 interface of the configuration node. When the configuration node fails, both configuration IP addresses move to the new configuration node. An overview of the new IP addresses on an SVC node port, and the rules governing how these IP addresses are moved between the nodes of an I/O Group, is given in Figure 2-12 on page 29. The management IP addresses and the iSCSI target IP addresses fail over to the partner node N2 if node N1 restarts (and vice versa). The iSCSI target IPs fail back to their corresponding ports on node N1 when node N1 is up again.
In an SVC cluster running 5.1 code, an eight-node cluster with full iSCSI coverage (maximum configuration) would therefore have the following number of IP addresses:
- Two IPv4 configuration addresses (one is always associated with the eth0:0 alias of the eth0 interface of the configuration node, and the other with eth1:0).
- One IPv4 service mode fixed address (although many DHCP addresses could also be used); this is always associated with the eth0:0 alias of the eth0 interface of the configuration node.
- Two IPv6 configuration addresses (one is always associated with the eth0:0 alias of the eth0 interface of the configuration node, and the other with eth1:0).
- One IPv6 service mode fixed address (although many DHCP addresses could also be used); this is always associated with the eth0:0 alias of the eth0 interface of the configuration node.
- Sixteen IPv4 addresses used for iSCSI access to each node (these are associated with the eth0:1 or eth1:1 alias of the eth0 or eth1 interface on each node).
- Sixteen IPv6 addresses used for iSCSI access to each node (these are associated with the eth0 and eth1 interfaces on each node).
The configuration of the SVC ports is shown in great detail in Chapter 7, SVC operations using the CLI on page 337 and Chapter 8, SVC operations using the GUI on page 469.
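The address counts listed above for the maximum configuration can be tallied directly. The only arithmetic assumption is the one stated in the text: two Ethernet ports per node, with one iSCSI address per port per IP family.

```python
# Tallying the address count listed above for an eight-node cluster
# with full iSCSI coverage (two Ethernet ports per node).
nodes = 8
ports_per_node = 2

addresses = {
    "ipv4_config": 2,                      # eth0:0 and eth1:0 on config node
    "ipv4_service": 1,
    "ipv6_config": 2,
    "ipv6_service": 1,
    "ipv4_iscsi": nodes * ports_per_node,  # 16: one per port, per node
    "ipv6_iscsi": nodes * ports_per_node,  # 16
}
total = sum(addresses.values())
print(total)  # 38
```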
iqn.1991-05.com.microsoft:ITSO_W2008) and, in addition, an optional CHAP secret can be specified. Adding VDisks to a host, that is, LUN masking, is done in the same way as for hosts connected to the SVC via Fibre Channel. Because iSCSI can be used in networks where data can be accessed illegally, the specification allows for different security methods. This can be done, for example, via a method such as IPSec, which, because it is implemented at the IP level, is transparent to higher levels such as iSCSI. Details on securing iSCSI can be found in RFC 3723, Securing Block Storage Protocols over IP, which is available at: http://tools.ietf.org/html/rfc3723
Copy services are implemented between VDisks within a single SVC cluster or between multiple SVC clusters. They are therefore independent of the functionality of the underlying disk subsystems used to provide storage resources to an SVC cluster.
These groups are applied at the secondary in the order in which they were executed at the primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the protocol maintains good throughput as the system size grows. The relationship between the two copies is not symmetric. One copy of the data set is considered the primary copy (sometimes also known as the source). This copy provides the reference for normal run-time operation. Updates to this copy are shadowed to a secondary copy (sometimes known as the destination or target). The secondary copy is not normally referenced for performing I/O. If the primary copy fails, the secondary copy can be enabled for I/O operation. A typical use of this function involves two sites, where the first provides service during normal running and the second is activated only when a failure of the first site is detected. The secondary copy is not accessible for application I/O other than the I/Os that are performed for the remote copy process itself. SVC allows read-only access to the secondary storage when it contains a consistent image. This is intended only to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay if required. For instance, many operating systems need to read LBA 0 to configure a logical unit. Enabling the secondary copy for active operation requires some SVC, operating system, and possibly application-specific work, which needs to be performed as part of the entire failover process. The SVC software at the secondary must be instructed to stop the relationship, which has the effect of making the secondary logical unit accessible for normal I/O access. The operating system might need to mount file systems, or do similar work, which can typically only happen when the logical unit is accessible for writes.
The application might have some log of work to recover. Note that this property of Remote Copy, the requirement to enable the secondary copy, differentiates it from RAID 1 mirroring. The latter aims to emulate a single, reliable disk regardless of which system is accessing it. Remote Copy retains the property that there are two volumes in existence, but suppresses one while the copy is being maintained. The underlying storage at the primary or secondary of a remote copy is normally RAID storage, but can be any storage that can be managed by the SVC. Making use of a secondary copy involves a conscious policy decision by a user that a failover is required. The application work involved in establishing operation on the secondary copy is substantial. The goal is to make this rapid but not seamless; rapid means very much faster than recovering from a backup copy. Most customers will aim to automate this through failover management software. SVC provides SNMP traps and interfaces to enable this automation. IBM's support for automation is provided by IBM Tivoli Storage Productivity Center for Replication. More information can be found at: http://www-03.ibm.com/systems/storage/software/center/replication/index.html or access the documentation online at the IBM Tivoli Storage Productivity Center information center at: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
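The failover sequence described above (stop the relationship, mount file systems, recover the application log) can be summarized as an ordered procedure. The `svc`, `host`, and `app` interfaces below are hypothetical placeholders for whatever automation drives each layer; in practice this orchestration is the job of failover management software such as the Tivoli product named above.

```python
def failover_to_secondary(svc, host, app):
    # Sketch of the failover sequence described above. The svc/host/app
    # interfaces are hypothetical, not real SVC or OS APIs.
    svc.stop_relationship()   # makes the secondary logical unit writable
    host.mount_filesystems()  # OS work possible only once writes are allowed
    app.recover_from_log()    # application-level recovery of in-flight work
    app.start()

class Recorder:
    # Minimal stand-in that records which steps ran, and in what order.
    def __init__(self, log):
        self.log = log
    def __getattr__(self, name):
        return lambda: self.log.append(name)

log = []
svc = host = app = Recorder(log)
failover_to_secondary(svc, host, app)
print(log)  # ['stop_relationship', 'mount_filesystems', 'recover_from_log', 'start']
```

The ordering matters: mounting and log recovery can only happen after the relationship is stopped, because only then is the secondary accessible for writes.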
2.2.16 FlashCopy
FlashCopy makes a copy of a Source VDisk to a Target VDisk. The original contents of the Target VDisk are lost. After the copy operation has started, the Target VDisk has the contents
Chapter 2. IBM System Storage SAN Volume Controller overview
of the Source VDisk as it existed at a single point in time. That is to say, although the copy operation takes a finite time, the resulting data at the target appears as if the copy was made instantaneously. FlashCopy can operate on multiple Source and Target Virtual Disks. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying Target Virtual Disks from their respective Source Virtual Disks. This allows a consistent copy of data that spans multiple Virtual Disks. SVC also permits multiple Target VDisks to be FlashCopied from each Source VDisk. This will often be used to create images from different points in time for each Source VDisk, but it is also possible to create multiple images from a Source VDisk at a common point in time, as described previously. Source and Target VDisks may be Space-Efficient VDisks. Starting with SVC 5.1, Reverse FlashCopy is supported. This enables Target VDisks to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points. FlashCopy is sometimes described as an instance of a Time-Zero (T0) or Point-in-Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that would be required to copy the data using conventional techniques. Most customers will aim to integrate the FlashCopy feature for point-in-time copies and quick recovery of their applications and databases. IBM's support for this is provided by Tivoli Storage FlashCopy Manager. Information can be found at: http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/ A detailed description of copy services such as data mirroring and FlashCopy can be found in <XXXX Ignorer Chapter 7>.
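The point-in-time illusion described above is commonly achieved with a per-grain bitmap and copy-on-write: a grain of the source is preserved on the target just before the first post-T0 overwrite, and uncopied grains read through to the unchanged source. The following is a generic sketch of that technique, not SVC's actual implementation; the grain size and structures are illustrative.

```python
class FlashCopyMap:
    # Minimal copy-on-write sketch of point-in-time copy semantics.
    # Not SVC's implementation; the structures here are illustrative.
    def __init__(self, source):
        self.source = source                 # source VDisk (list of grains)
        self.target = [None] * len(source)   # target VDisk, lazily populated
        self.copied = [False] * len(source)  # per-grain bitmap

    def write_source(self, grain, data):
        # Before overwriting a grain on the source, preserve its T0 image.
        if not self.copied[grain]:
            self.target[grain] = self.source[grain]
            self.copied[grain] = True
        self.source[grain] = data

    def read_target(self, grain):
        # Uncopied grains still read through to the (unchanged) source.
        return self.target[grain] if self.copied[grain] else self.source[grain]

fc = FlashCopyMap(source=["a", "b", "c"])
fc.write_source(0, "A")                      # host update after the T0 point
print(fc.read_target(0), fc.read_target(1))  # a b  (target shows T0 contents)
```

This is why the copy appears instantaneous: the target is readable immediately, and the background/on-demand copying is hidden behind the bitmap.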
Aspects of Data Migration are covered in <XXXX JonRef Chapter 6>.
managed as a set (cluster) and present a single point of control to the administrator for configuration and service activity. The current eight-node limit within an SVC cluster is a limitation of the current product, not an architectural one. Larger clusters are possible without changing the underlying architecture. SVC demonstrated its ability to scale during the recently run Quicksilver project; more details can be found at: http://www-03.ibm.com/press/us/en/pressrelease/24996.wss Based on a fourteen-node cluster coupled with solid-state drive controllers, a data rate of over one million I/O operations per second, with a response time of under one millisecond (ms), was achieved. Although the SAN Volume Controller code is based on a purpose-optimized Linux kernel, the clustering feature is not based on Linux clustering code. The cluster software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes, makes the code portable, and provides the means to keep the single instances of the SVC code running on the different cluster nodes in sync. Node restarts (during a code upgrade), adding new nodes, removing old nodes from a cluster, or node failures therefore do not impact SVC's availability. It is key for a cluster, that is, for all active nodes of a cluster, to know that they are members of the cluster. Especially in situations such as the split-brain scenario, where single nodes lose contact with other nodes and cannot determine whether those nodes can still be reached, it is essential to have a solid mechanism to decide which nodes form the active cluster. A worst-case scenario would be a cluster that splits into two separate clusters. Within an SVC cluster, the so-called voting set and an optional quorum disk are responsible for the integrity of the cluster.
If nodes are added to a cluster, they are added to the voting set; if they are removed, they are also quickly removed from it. Over time, the voting set, and hence the nodes in the cluster, can completely change, such that the cluster ends up having migrated onto a completely different set of nodes from the set it started on. Within an SVC cluster, the quorum is defined as:
1. More than half the nodes in the voting set, or
2. Exactly half the nodes in the voting set plus the quorum disk from the voting set, or
3. When there is no quorum disk in the voting set, exactly half of the nodes in the voting set, if that half includes the node that appears first in the voting set (nodes are entered into the voting set in the first available free slot).
These rules guarantee that at most one group of nodes is ever able to operate as the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic quorum. This means that following a loss of some nodes, if the cluster can continue operation, the cluster adjusts the quorum requirement so that further node failures can be tolerated. The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes and proceeds to determine (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum of two cluster IP addresses on one or both of its Ethernet ports to allow access for cluster management.
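The three quorum rules can be expressed as a single predicate. This is a sketch: `quorum_disk_present` stands for "the quorum disk from the voting set is accessible to this set of nodes", and the ordered `voting_set` list models the slot ordering used by rule 3.

```python
def has_quorum(voting_set, present_nodes, quorum_disk_present):
    # The three quorum rules quoted above, as a predicate.
    # voting_set is ordered: index 0 is the first (tie-breaking) slot.
    n = len(voting_set)
    present = [node for node in voting_set if node in present_nodes]
    if len(present) * 2 > n:                  # rule 1: strict majority
        return True
    if len(present) * 2 == n:
        if quorum_disk_present:               # rule 2: half + quorum disk
            return True
        return voting_set[0] in present_nodes # rule 3: half incl. first slot
    return False

# A 4-node cluster splits 2/2: only the half that holds the tie-breaker
# (the quorum disk, or otherwise the first voting-set slot) survives.
print(has_quorum(["n1", "n2", "n3", "n4"], {"n1", "n2"}, False))  # True
print(has_quorum(["n1", "n2", "n3", "n4"], {"n3", "n4"}, False))  # False
```

Because only one side of a split can hold the quorum disk (or the first slot), the two halves can never both pass the predicate, which is exactly the "never splits into two" guarantee.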
Special considerations concerning the placement of the active quorum disk must be taken into account for stretched cluster, that is, stretched I/O Group, configurations. Details are available at: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Note: Running an SVC cluster without a quorum disk can seriously impact your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored VDisks can be taken offline if no quorum disk is available, because the synchronization status for mirrored VDisks is recorded on the quorum disk.

During normal operation of the cluster, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. Should a node fail for any reason, the workload intended for it is taken over by another node until the failed node has been restarted and re-admitted to the cluster (which happens automatically). If the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is re-admitted to the cluster (again, all automatically).
2.3.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in anything from 1 to 10 ms of response time (for an enterprise-class disk). The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB of memory per node, that is, 48 GB per I/O Group or 192 GB per SVC cluster. SVC provides a flexible cache model, and a node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of a node's memory. Depending on the current I/O situation on a node, the free part of the memory (up to the full 24 GB) may be used as read cache. Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage; for example, 4 KB is written into a track, and another 4 KB is written to another location in the same track. Because of this, the blocks written from the SVC to the disk subsystem can have any size from 512 bytes up to 32 KB.
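The page/track mechanics above can be sketched as follows. The structures are illustrative, not SVC internals, and this sketch tracks dirty data at page granularity only (the real cache can destage down to 512 bytes, as the text states).

```python
PAGE = 4 * 1024    # cache page size: 4 KB
TRACK = 32 * 1024  # track: unit of locking and destage (eight pages)

class CacheTrack:
    # Sketch of write coalescing into one 32 KB track before destage.
    def __init__(self):
        self.valid = [False] * (TRACK // PAGE)  # which pages hold dirty data

    def write(self, offset_in_track, length):
        # Mark every page touched by this write as valid (dirty).
        first = offset_in_track // PAGE
        last = (offset_in_track + length - 1) // PAGE
        for p in range(first, last + 1):
            self.valid[p] = True

    def destage_size(self):
        # Amount of dirty data this track would destage in one operation
        # (page-granular here; the real cache can go down to 512 bytes).
        return sum(self.valid) * PAGE

t = CacheTrack()
t.write(0, 4096)         # 4 KB written at the start of the track
t.write(16384, 4096)     # another 4 KB elsewhere in the same track
print(t.destage_size())  # 8192
```

Two separate 4 KB writes landing in the same track are coalesced into a single destage, which is the behavior the paragraph above describes.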
When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the partner node, that is, copied into the cache of the partner node, for availability reasons. Completion is returned only after a copy of the written data exists on both nodes: write data held in cache has not yet been destaged to disk, so if only one copy were kept, a node failure would risk data loss. Write cache entries without updates during the last two minutes are automatically destaged to disk. If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O completion status back to the host. While running in this mode, the performance of the specific I/O Group can be degraded. Starting with SVC Version 4.2.1, (write-)cache partitioning was introduced to the SVC. This feature restricts the maximum amount of write cache a single Managed Disk Group (MDG) can allocate in a cluster. Table 2-2 shows the upper limit of write cache data that any one MDG in a cluster can occupy.
Table 2-2 Upper limit of write cache per MDG

  Number of MDGs    Upper limit
  One               100%
  Two               66%
  Three             40%
  Four              33%
  More than four    25%
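The limits in Table 2-2 reduce to a simple lookup, with 25% as the floor once a cluster has more than four MDGs:

```python
def write_cache_limit(num_mdgs):
    # Upper limit (percent of cluster write cache) that any single
    # MDisk Group can occupy, per Table 2-2.
    limits = {1: 100, 2: 66, 3: 40, 4: 33}
    return limits.get(num_mdgs, 25)  # more than four MDGs: 25%

print(write_cache_limit(3))  # 40
```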
For in-depth information about SVC cache partitioning, we strongly recommend the following IBM Redpaper publication: IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at: http://www.redbooks.ibm.com/abstracts/redp4426.html?Open An SVC node is able to treat some or all of its physical memory as non-volatile. Non-volatile means that its contents are preserved across power losses and resets. Besides the bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table and the write cache are the most important items in the non-volatile memory. The actual amount that can be treated as non-volatile depends on the hardware. In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The UPS delivered with each node ensures that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the contents of the non-volatile part of the memory to disk, the SAN Volume Controller node shuts down.
Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM agent that is embedded in the SAN Volume Controller cluster. With release 5.1 of the code, using the embedded CIMOM is mandatory. This CIMOM supports the SMI-S Version 1.3 standard.
SSPC
SSPC is based on server hardware (IBM System x based) and a set of preinstalled and optional software modules. Some of these preinstalled modules provide base functionality only, or are not activated. These modules, or their enhanced functionalities, can be activated by adding separate licenses. An overview of the functions:
- Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for Web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on for Tivoli Storage Productivity Center, which now installs Tivoli Integrated Portal along with it.
- Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition 4.1.0 is preinstalled on the SSPC server. There are several different commercially available packages of Tivoli Storage Productivity Center that provide additional functionality beyond Basic Edition. These packages can be activated by adding the specific licenses to the preinstalled Basic Edition:
  - Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
  - Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
  - Tivoli Storage Productivity Center Standard Edition is a bundle that includes all other packages, along with SAN planning tools that make use of information collected from the Tivoli Storage Productivity Center components.
- Tivoli Storage Productivity Center for Replication: The basic functions provide management of IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the IBM ESS Model 800, IBM DS6000, DS8000, and IBM SAN Volume Controller. This package can be activated by adding the specific licenses.
- SVC GUI (ICAT)
- SSH client (PuTTY)
- Windows 2008 Enterprise Edition
Several base software packages required for TPC. Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, may be installed on the SSPC server by the customer. An overview of the SVC management components is given in Figure 2-13. Details are described in Chapter 4, SVC initial configuration on page 101. Details about the SSPC can be found in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336-03.
40
Forbidden characters are: ' (single quote), : (colon), % (percent), * (asterisk), , (comma), and " (double quote). A user name cannot begin or end with a blank. Passwords for local users do not have any forbidden characters, but passwords cannot begin or end with blanks.
SVC superuser
There is a special local user called the superuser that always exists on every cluster; it cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node's front panel. This reset function can be disabled, although doing so leaves the cluster inaccessible if all users forget their passwords or lose their SSH keys. The superuser's password supersedes the cluster administrator password present in previous software releases. To register an SSH key for the superuser in order to provide command-line access, the GUI has to be used; this is usually done at the end of the cluster initialization process, but the key can also be added later. The superuser is always a member of user group 0, which has the most privileged role within the SVC.
The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can (or cannot) do on an SVC cluster. Table 2-4 on page 42 shows the roles ordered from the least privileged Monitor role down to the most privileged SecurityAdmin role.
Table 2-4 User roles and allowed commands

Monitor: All svcinfo commands. svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser. svcconfig: backup.

Service: All commands allowed for the Monitor role, plus svctask: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime.

CopyOperator: All commands allowed for the Monitor role, plus svctask: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership.

Operator: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset.

Administrator: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset.

SecurityAdmin: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset.
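A role-based authorization check of this kind reduces to set membership: each role owns a set of permitted commands, and more privileged roles are supersets of less privileged ones. The following sketch is illustrative only; the command sets are heavily abbreviated, and the exact role-to-command matrix is the one defined by the product, not this grouping.

```python
# Illustrative role -> command authorization check. Command sets are
# abbreviated; the exact matrix is defined by the SVC product docs.
MONITOR = {"finderr", "dumperrlog", "dumpinternallog", "chcurrentuser"}
SERVICE = MONITOR | {"applysoftware", "addnode", "rmnode", "settimezone"}
COPY_OPERATOR = MONITOR | {"startfcmap", "stopfcmap", "startrcrelationship"}
USER_MGMT = {"chauthservice", "mkuser", "rmuser", "chuser", "setpwdreset"}
ALL = SERVICE | COPY_OPERATOR | USER_MGMT | {"mkvdisk", "rmvdisk"}

ROLES = {
    "Monitor": MONITOR,
    "Service": SERVICE,
    "CopyOperator": COPY_OPERATOR,
    "Administrator": ALL - USER_MGMT,  # everything except user management
    "SecurityAdmin": ALL,              # most privileged role
}

def allowed(role, command):
    # A command is permitted when it belongs to the role's command set.
    return command in ROLES[role]

print(allowed("CopyOperator", "startfcmap"),
      allowed("Monitor", "applysoftware"))  # True False
```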
The authentication service supported by SVC is the Tivoli Embedded Security Services (ESS) server component, level 6.2. The ESS server provides the following two key features:
1. ESS isolates the SVC from the actual directory protocol in use. The SVC communicates only with the ESS to get its authentication information. The type of protocol used to access the central directory, and the kind of directory system used, are transparent to SVC.
2. ESS provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users should not have to log in multiple times when using what appears to them to be a single system. It is used within TPC; in other words, when the SVC Console is launched from within TPC, the user does not have to log on to the SVC Console because they have already logged in to TPC.
With reference to Figure 2-16 on page 45, the user starts application A with a username and password (1), which is authenticated using the ESS server (2). The server returns a token (3), which is an opaque string that can only be interpreted by the ESS server. The server also supplies the user's groups and an expiry timestamp for the token. The client device (SVC in our case) is responsible for mapping an ESS user group to roles. Application A needs to launch application B. Instead of requiring the user to enter a new password to authenticate to application B, A passes B the ESS token (4). Application B passes the ESS token to the ESS server (5), which decodes the token and returns the user's ID and groups to application B (6), along with an expiry timestamp.
44
[Figure 2-16: SSO message flow between Application A, Application B, the ESS server, and the LDAP server: 1: login(u, p); 2: auth(u, p); 3: auth_ok(tk, ts, g); 4: launch(tk); 5: auth(tk); 6: auth_ok(tk, ts, u, g)]
The token expiry timestamp is advice to the ESS client applications A and B about credential caching. The applications are permitted to cache and use a token, or a username-password combination, until the timestamp returned by the server expires. So in our example, application B could cache the fact that a particular token maps to a particular user ID and groups. This is a performance boost, as it saves the latency of querying the ESS server on each interaction between A and B. Once the lifetime of the token has expired, application A must query the server again and obtain a new timestamp in order to rejuvenate the token (or alternatively discover that the credentials are now invalid). The ESS server administrator can configure the length of time used to set expiry timestamps. This system is only effective if the ESS server and the applications have synchronized clocks.
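The caching behavior described above can be sketched as a small token cache that honors the server-supplied expiry timestamp. The class and method names are hypothetical, not part of the ESS API; the point is only that a cached entry is usable until its expiry, after which the application must go back to the server.

```python
import time

class TokenCache:
    # Sketch of the credential caching described above: an application
    # may reuse a validated token until the server-supplied expiry
    # timestamp. Names are hypothetical, not the ESS API.
    def __init__(self):
        self.entries = {}  # token -> (user, groups, expiry_ts)

    def put(self, token, user, groups, expiry_ts):
        self.entries[token] = (user, groups, expiry_ts)

    def lookup(self, token, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(token)
        if entry is None:
            return None
        user, groups, expiry_ts = entry
        if now >= expiry_ts:
            # Expired: drop the entry; the caller must re-query the server.
            del self.entries[token]
            return None
        return user, groups

cache = TokenCache()
cache.put("tk1", "alice", ["svc_admins"], expiry_ts=1000.0)
print(cache.lookup("tk1", now=999.0))   # ('alice', ['svc_admins'])
print(cache.lookup("tk1", now=1001.0))  # None
```

Note how this only works if both sides agree on the clock, which is exactly the synchronization requirement stated above.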
If none of a user's groups match any of the SVC user groups, the user is not permitted to access the cluster.
3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access should be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step, to determine the user's role. The need to configure the user's password on the cluster in addition to the authentication service is due to a limitation in the ESS server software.
5. Configure the system time. For correct operation, both the SVC cluster and the system running the ESS server must have exactly the same view of the current time. The easiest way to achieve this is for both to use the same NTP server. Failure to follow this step could lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.
Also, TPC 4.1 leverages the TIP infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO). More information on how SSO is implemented within TPC 4.1 can be found in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, at: http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
non-disruptive upgrade capability may be used to replace older engines with new 2145-CF8 engines. They are 1U high, fit into 19-inch racks, and use the same UPS models as previous models. Integration into existing clusters requires that the cluster runs SVC 5.1 code. The only node that does not support SVC 5.1 code is the 2145-4F2 type node. An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes will be available later this year. Figure 2-17 shows the front view of the new SVC 2145-CF8 node:
Bear in mind that some of the new features in the new SVC 5.1 release, such as iSCSI, are software features and are therefore available on all nodes supporting this release.
The SVC imposes no limit on the FC optical distance between SAN Volume Controller nodes and host servers. Fibre Channel standards, along with SFP capabilities and cable type, dictate the maximum supported FC distances. If you use longwave SFPs in the SVC node itself, the longest supported FC link between the SVC and the switch is 10 km. The actual cable lengths supported with shortwave SFPs are shown in Table 2-5.
Table 2-5 Overview of supported cable lengths

  FC speed   OM1 (M6) standard 62.5/125 µm   OM2 (M5) standard 50/125 µm   OM3 (M5E) optimized 50/125 µm
  2 Gbps     150 m                           300 m                         500 m
  4 Gbps     70 m                            150 m                         380 m
  8 Gbps     21 m                            50 m                          150 m
With respect to the number of ISL hops allowed in a SAN fabric between SVC nodes or clusters, the rules defined in Table 2-6 apply.
Table 2-6 Number of supported ISL hops

  Between nodes in an I/O Group:          0 (connect to the same switch)
  Between nodes in different I/O Groups:  1 (recommended: 0; connect to the same switch)
  Between nodes and disk subsystem:       1 (recommended: 0; connect to the same switch)
  Between nodes and host server:          max. 3
The individual times shown are less important than the difference between accessing data located in cache and data located on external disk. As an aid, we have added a second scale that gives an impression of how long an access would take in the hypothetical case that a single CPU cycle took one second. This should give you an even better feeling for how important it is for future storage technologies to close, or at least reduce, the gap between access times for data stored in cache or memory and access times for data stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress regarding capacity growth, form factor and size reduction, price decrease ($/GB), and reliability. On the other hand, the number of I/Os that a disk can handle and the response time required to process a single I/O have not improved at the same rate; they have certainly improved, but not nearly as quickly. In real environments we can expect from today's enterprise-class FC or SAS disks up to 200 IOPS per disk, with an average response time (that is, a latency) of approximately 7 ms per I/O. To simplify, one can state that rotating disks are getting, and will continue to get, bigger in capacity (several TB), smaller in form factor and footprint (3.5", 2.5", 1.8"), and cheaper ($/GB), but not necessarily faster. The limiting factor is the number of revolutions per minute (rpm) a disk can perform (currently 15,000). This factor largely defines the time that is required to access a specific data block on a rotating device. There might be minor improvements in the future, but a big step, such as doubling the number of revolutions, would, if technically possible at all, inevitably be accompanied by a massive increase in power consumption and in the price of such a device.
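The relationship between rotational speed, seek time, and the roughly 200 IOPS ceiling mentioned above can be sketched numerically. This is an illustrative model only; the seek and transfer figures below are assumptions, not measured values for any particular drive.

```python
# A rough model of rotating-disk service time, for illustration only.
# seek_ms and transfer_ms are assumed figures, not vendor data.

def avg_service_time_ms(rpm, seek_ms, transfer_ms):
    """Average I/O service time: seek + half a rotation + transfer."""
    rotational_ms = 0.5 * 60000.0 / rpm   # half a revolution, in ms
    return seek_ms + rotational_ms + transfer_ms

def iops_ceiling(service_time_ms):
    """Upper bound on IOPS for one disk handling one I/O at a time."""
    return 1000.0 / service_time_ms

svc_time = avg_service_time_ms(rpm=15000, seek_ms=3.5, transfer_ms=0.5)
print(f"service time {svc_time:.1f} ms -> about {iops_ceiling(svc_time):.0f} IOPS")
```

With these assumptions a 15,000 rpm disk lands in the 150 to 200 IOPS range, in line with the figures quoted in the text; doubling the rpm would only halve the rotational component, which is why spindle speed is the hard limit.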
A comprehensive, and in our opinion well worth reading, overview of SSD technology can be found in a subset of the well-known Spring 2009 SNIA Technical Tutorials. They are available on the SNIA Web site at:
http://www.snia.org/education/tutorials/2009/spring/solid
When will all this happen? As always, it depends, or to say it with Niels Bohr: "Prediction is very difficult, especially if it is about the future." What we can say at this time is that there is evidence that all of this will become reality in the second half of this decade and, as you can imagine, it will fundamentally change the architecture of today's storage infrastructures. The IBM SAN Volume Controller, and how the first releases of this new technology are integrated into the SVC, is described in the following topic.
Nodes that contain SSDs can coexist in a single SAN Volume Controller cluster with any other supported nodes. However, do not combine nodes that contain SSDs and nodes that do not contain SSDs in a single I/O group; it is acceptable to temporarily mix node types in an I/O Group while upgrading SVC node hardware from an older model to the 2145-CF8. Nodes that contain SSDs in a single I/O group must share the same SSD capacities. Quorum functionality is not supported on SSDs within SAN Volume Controller nodes.
You must follow the SAN Volume Controller SSD configuration rules for MDisks and MDisk groups:
Each SSD is recognized by the cluster as a single MDisk.
For each node that contains SSDs, create a single MDisk group that includes only the SSDs that are installed in that node.
Terminology: An MDG using SSDs contained within an SVC node is referred to as SVC SSD storage throughout this book, and the configuration rules given in this book apply to SVC SSD storage. Do not confuse this term with SSD storage contained in SAN-attached storage controllers, such as the IBM DS8000 or DS5000, which is described elsewhere.
When you add a new solid-state drive (SSD) to an MDisk group, that is, move it from unmanaged to managed mode, the SSD is automatically formatted and set to a block size of 512 bytes.
You must follow these configuration rules for VDisks using storage from SSDs within SAN Volume Controller nodes:
VDisks using SVC SSD storage must be created in the I/O group where the SSDs physically reside.
VDisks using SVC SSD storage must be mirrored to another MDG to provide fault tolerance. The supported mirroring configurations are:
For the highest performance, the two VDisk copies must be created in the two managed disk groups that correspond to the SVC SSD storage in the two nodes of the same I/O group. The recommended SSD configuration for highest performance is shown in Figure 2-19 on page 53.
For the best utilization of the SSD capacity, the primary VDisk copy must be placed on SVC SSD storage, and the secondary copy may be placed on Tier 1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance of the VDisk will degrade to the performance of the non-SSD storage. All read I/Os are sent to the primary copy of a mirrored VDisk, so reads will experience SSD performance. Write I/Os are mirrored to both locations, so write performance will match the speed of the slower copy. The recommended SSD configuration for best SSD capacity utilization is shown in Figure 2-20 on page 54.
To balance the read workload, evenly split the primary and secondary VDisk copies across the nodes that contain SSDs. The preferred node of the VDisk must be the same node that contains the SSDs used by the primary VDisk copy.
Important: For VDisks provisioned out of SVC SSD storage, VDisk Mirroring is mandatory in order to maintain access to the data stored on SSDs if one of the nodes in the I/O Group is being serviced or fails.
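The read/write asymmetry of a mirrored VDisk described above can be captured in a small model: reads see only the primary copy, while writes must complete on both copies. This is an illustrative sketch, not SVC code; the latency figures used are assumptions.

```python
def mirrored_vdisk_latency(primary_ms, secondary_ms):
    """Reads go to the primary copy only; writes complete at the speed
    of the slower copy, because both copies must be updated."""
    read_ms = primary_ms
    write_ms = max(primary_ms, secondary_ms)
    return read_ms, write_ms

# Assumed figures: SSD primary ~0.5 ms, Tier 1 disk secondary ~7 ms.
read_ms, write_ms = mirrored_vdisk_latency(0.5, 7.0)
```

With an SSD primary and disk secondary, reads run at SSD speed (0.5 ms here) while writes are throttled to the disk's 7 ms, which is exactly why the text recommends this layout only when the Tier 1 storage can sustain the write rate.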
Bear in mind that VDisks based on SVC SSD storage should always be presented by the I/O Group and, during normal operation, by the node to which the SSDs belong. The rules above are designed to direct all host I/O to the node containing the relevant SSDs. Existing VDisks can be migrated online to SVC SSD storage. It might therefore be necessary to move the VDisk into the right I/O Group first, which requires quiescing I/O to this VDisk during the move. The recommended SSD configuration for highest performance is shown in Figure 2-19.
For a read-intensive application, mirrored VDisks can keep their secondary copy on a SAN-based Managed Disk Group. This could be, for example, an IBM DS8000 providing Tier 1 storage resources to an SVC cluster. Since all read I/Os are sent to the primary copy (which would be the SSD copy), this gives reasonable performance as long as the Tier 1 storage can sustain the write I/O rate. Performance will decrease if the primary copy fails. Again, ensure that the node on which the primary VDisk copy resides is also the preferred node for the VDisk. The recommended SSD configuration for best capacity utilization is shown in Figure 2-20 on page 54.
Figure 2-20 Recommended SSD configuration for best SSD capacity utilization
Bear the following points in mind when you are using SVC SSD storage:
I/O requests to SSDs that are in other nodes are automatically forwarded; however, this introduces additional delays. Try to avoid these configurations by following the configuration rules given previously.
Take care before migrating image mode VDisks to SVC SSD storage or deleting a copy of a mirrored VDisk based on SVC SSD storage: in all scenarios where your data is stored in a single SSD-based MDG only, it is no longer protected against node or disk failures.
If you delete or replace nodes containing local SSDs in a cluster, remember that the data stored on their SSDs may have to be decommissioned.
If you shut down a node whose SVC SSD storage contains VDisks without mirrors on another node or storage system, you lose access to any VDisks that are associated with that SVC SSD storage. The shutdown is rejected unless you specify a force option, which protects against an unintended loss of access.
The SVC 5.1 code provides the functionality to upgrade the SSD firmware and FPGA code. Details and how-tos can be found in the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide, SC23-6628.
2.6.2 SVC 5.1 supported hardware list, device driver and firmware levels
With the SVC 5.1 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC clusters, as well as a number of interoperability enhancements and new support for servers, SAN switches, and disk subsystems. The most current information can be found at:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277
New hardware nodes (CF8)
The new SVC engine is based on the IBM System x3550 M2 server with an Intel Core i7 2.4 GHz quad-core processor. It provides 24 GB of cache (with future growth possibilities) and four 8 Gbps FC ports, and it supports Solid State Drives (up to four per SVC node), enabling scale-out high-performance SSD support with the SVC. The new nodes may be intermixed in pairs with other engines in SVC clusters. Details are described in 2.4, SVC hardware overview on page 46.
64-bit kernel
For Model 8F2 and later, the SVC software kernel has been upgraded to take advantage of the 64-bit hardware on SVC nodes. The Model 4F2 is not supported with SVC 5.1 software, but remains supported with SVC 4.3.x software. The 2145-8A4 is an effective replacement for the 4F2 and doubles its performance. Going to 64-bit mode improves performance capability. It allows for the cache increase (24 GB) in the 2145-CF8 and will be used in future SVC releases for further cache increases and other expansion options.
Solid State Disk support
Optional Solid State Drives (SSDs) in SVC engines provide a new ultra-high-performance storage option. Up to four SSDs per node (140 GB each, larger in the future) can be added. This provides up to 540 GB of usable SSD capacity per I/O group, or more than 2 TB in an eight-node SVC cluster. The SVC's scalable architecture enables customers to take advantage of the throughput capabilities of SSDs. SSDs are fully integrated into the SVC architecture: VDisks may be migrated to and from SSD VDisks without application disruption, and FlashCopy may be used for backup or to copy data to SSD VDisks. Details are described in 2.5, Solid State Drives on page 49.
iSCSI support
Native attachment to the SVC for host systems using the iSCSI protocol. This is a software feature; it is therefore also supported on older SVC nodes that support SVC 5.1. iSCSI is not used for storage attachment, for SVC cluster-to-cluster communication, or for communication between SVC engines in a cluster.
This is still done via Fibre Channel. Details are described in 2.2.10, iSCSI Overview on page 26.
Multiple relationships for synchronous data mirroring (Metro Mirror)
Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist between a maximum of four SVC clusters. Keep in mind that a VDisk can be in only one MM/GM relationship. The creation of up to 8192 Metro Mirror and/or Global Mirror relationships is supported, and the individual relationships are separately controllable (create/delete, start/stop). Details are described in Synchronous/Asynchronous Remote Copy on page 32.
Enhancements to FlashCopy: support for reverse FlashCopy
This enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. Multiple targets, and thus multiple rollback points, are supported. Details are described in 2.2.16, FlashCopy on page 33.
Zero Detection
This provides the means to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient Virtual Disk (SEV) using VDisk Mirroring. To migrate from a fully allocated to a Space-Efficient VDisk, add the target space-efficient copy, wait for synchronization to complete, and then remove the source fully allocated copy. Details are described in 2.2.7, Mirrored VDisk on page 21.
User authentication changes
SVC 5.1 supports remote authentication and single sign-on by using an external service running on the SSPC, namely the Tivoli Embedded Security Services (ESS) installed on the SSPC. The current local authentication methods are still supported. Details are described in 2.3.5, User Authentication on page 40.
RAS Enhancements
As a complement to the existing SVC e-mail and SNMP trap facilities, the SVC adds syslog error event logging for those who already use syslog in their configurations. This enables optional transmission over the syslog interface to a remote syslog daemon when parsing the Error Event Log. The format and content of messages sent to a syslog server are identical to those transmitted in an SNMP trap message.
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation can be found at:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks on the SVC can be found at:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Cluster
A group of 2145 nodes that present a single configuration and service interface to the user.
Consistency Group
A group of Virtual Disks that have copy relationships that need to be managed as a single entity.
Copied
Copied is a FlashCopy state that indicates that a copy has been triggered at some time after the copy relationship was created. The copy process is complete and the Target Disk has no further dependence on the Source Disk. The time of the last trigger event is normally displayed with this status.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages a cache of the configuration information that describes the cluster configuration and provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will assume the role.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all the connectivity of the redundant SAN, but without the 100% redundancy. An SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A counterpart SAN is often called a SAN fabric.
Error Code
A value used to identify an error condition to a user. This value might map to one or more Error IDs or to values presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.
Error ID
A value used to identify a unique error condition detected by the 2145 cluster. An Error ID is used internally in the cluster to identify the error.
Excluded
A status condition that describes a Managed Disk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the Managed Disk into the cluster managed storage.
Extent
A fixed size unit of data that is used to manage the mapping of data between Managed Disks and Virtual Disks.
FRU
Field Replaceable Unit. The individual parts which are held as spares by the service organization.
Grain
A grain is the unit of data represented by a single bit in a FlashCopy bitmap (64 KB or 256 KB) in the SAN Volume Controller. It is also the unit used to extend the real size of a space-efficient VDisk (32, 64, 128, or 256 KB).
HBA
Host Bus Adapter. In the context of Lodestone, this is an interface card that connects a host bus, such as PCI, to the SAN.
Host ID
A numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID there is a separate mapping of SCSI IDs to VDisks. It is intended that there be a one-to-one relationship between hosts and host IDs, although this cannot be policed.
Image Mode
A configuration mode similar to Router mode but with the addition of cache/copy functions. SCSI commands are not forwarded directly to the Managed Disk.
I/O Group
A collection of VDisk and node relationships, that is, an SVC node pair that presents a common interface to host systems. Each SAN Volume Controller node is associated with exactly one I/O group. The two nodes in the I/O group provide access to the VDisks in the I/O group.
ISL hop
An interswitch link (ISL) is a connection between two switches, and is counted as an ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. It measures distance only in terms of ISLs in the fabric.
Local fabric
Since the SVC supports remote copy, there might be significant distances between the components in the local cluster and those in the remote cluster. The local fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.
LU and LUN
Formally defined by the SCSI standards as Logical Unit and Logical Unit Number. Used as abbreviations for an entity that exhibits disk-like behavior, for example, a VDisk or an MDisk.
Node
A single processing unit, which provides virtualization, cache, and copy services for the SAN. SVC nodes are deployed in pairs called I/O groups. One node in the cluster is designated the configuration node.
Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection or connections to the traffic on the most heavily loaded ISL or ISLs, where more than one is used between these switches. This assumes a symmetrical network and a specific workload applied evenly from all initiators and directed evenly to all targets. A symmetrical network means that all the initiators are connected at the same level and all the controllers are connected at the same level.
Prepare
A configuration command that is used to cause cache data to be flushed in preparation for a copy trigger operation.
RAS
Reliability, Availability, and Serviceability.
RAID
Redundant Array of Independent Disks.
Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, albeit possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so even if one counterpart SAN is destroyed, the other counterpart SAN keeps functioning.
Remote fabric
Since the SVC supports remote copy, there might be significant distances between the components in the local cluster and those in the remote cluster. The remote fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together.
SAN
Storage Area Network.
SCSI
Small Computer Systems Interface.
SLP
The Service Location Protocol (SLP) is a service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It has been defined in RFC 2608.
Chapter 2. IBM System Storage SAN Volume Controller overview
SSPC
IBM System Storage Productivity Center (SSPC) replaces the master console for new installations of SAN Volume Controller Version 4.3.0. For SSPC planning, installation, and configuration information, see the following Web site: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
Chapter 3. Planning and configuration
10. Define the managed disk groups (MDGs). This depends on the disk subsystem in place and the data migration needs.
11. Create and re-partition the VDisks between the different I/O groups and the different MDGs in such a way as to optimize the I/O load between the hosts and the SVC. This can be an equal re-partition of all the VDisks between the different nodes, or a re-partition that takes into account the expected load from the different hosts.
12. Plan for the physical location of the equipment in the rack.
13. Determine the IP addresses for the SVC cluster, the SVC service IP address, and the SSPC (SVC console).
14. Define the number of FlashCopy mappings required per host.
15. Define the cluster configuration backup and business data backup.
SVC planning can be categorized into two different types:
Physical planning
Logical planning
2145 UPS-1U
The 2145 uninterruptible power supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can only operate with, the following node types:
SAN Volume Controller 2145-CF8
SAN Volume Controller 2145-8A4
SAN Volume Controller 2145-8G4
SAN Volume Controller 2145-8F2
SAN Volume Controller 2145-8F4
It was also shipped with, and will operate with, the SAN Volume Controller 2145-4F2. When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 to 240 V, single phase.
Note: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.
2145 UPS
The 2145 uninterruptible power supply (2145 UPS) is two EIA units high and was shipped only with the SAN Volume Controller 2145-4F2 prior to SVC V2.1. Be aware of the following considerations when configuring the 2145 UPS:
Each 2145 UPS must be connected to a separate branch circuit.
A UL-listed 15 A circuit breaker must be installed in each branch circuit that supplies power to the 2145 UPS.
The voltage that is supplied to the 2145 UPS must be single phase, 200 to 240 V, with a supply frequency of 50 or 60 Hz.
Heat output
The maximum heat output parameters differ depending on which SVC node models are connected. For updated heat output values, refer to the IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551.
In SVC versions prior to SVC V2.1, the Powerware 5125 UPS was shipped with the SVC; in SVC V4.2, the Powerware 5115 UPS is shipped with the SVC. You can upgrade an existing SVC cluster to V4.3.1.x and still use the Powerware 5125 UPS that was delivered with the SVC prior to V2.1. Each SVC node of an I/O group must be connected to a different UPS. Each UPS shipped with SVC V3.1, V4.1, V4.2, and V4.3 supports one node only, but each UPS in earlier versions of the SVC supports up to two SVC nodes (in distinct I/O groups). Each UPS pair that supports a pair of nodes must be connected to a different power domain (if possible) to reduce the chances of input power loss. For safety reasons, the UPSs should be installed in the lowest positions in the rack; if necessary, move lighter units towards the top of the rack to make way for them. The power and serial connections from a node must be connected to the same UPS; otherwise, the node will not boot.
The 5115 and 5125 UPSs can be mixed with UPSs that were supplied with earlier SVC versions, but the UPS rules above have to be followed, and SVC nodes in the same I/O group must be attached to the same type of UPS, though not the same UPS. The 2145-CF8, 2145-8A4, 2145-8G4, 2145-8F2, and 2145-8F4 hardware models must be connected to a 5115 UPS; they will not boot with a 5125 UPS.
Important: Do not share the SVC UPS with any other devices.
Figure 3-3 shows the ports for the 2145-CF8.
Figure 3-4 on page 69 shows a power cabling example for the 2145-CF8.
There are some guidelines to follow for FC cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes; one example is the WWPN port mapping. The 2145-8G4 and 2145-CF8 have the same mapping. Figure 3-5 on page 70 shows the WWPN mapping.
We suggest that you place the racks in different rooms, if possible, in order to obtain protection against critical events (fire, water, power loss, and so on) that might affect one room only. Bear in mind the maximum distance supported between the nodes in one I/O group (100 meters). This distance can be extended by submitting a formal SCORE request to increase the limit and by following the rules that will be specified in any SCORE approval.
SAN zoning and SAN connections
iSCSI IP addressing plan
Back-end storage subsystem configuration
SVC cluster configuration
Managed disk group configuration
Virtual Disk configuration
Host mapping (LUN masking)
Advanced copy functions
SAN boot support
Data migration from non-virtualized storage subsystems
SVC configuration backup procedure
Each node in an SVC cluster needs to have at least one Ethernet connection. IBM supports the option of having multiple console access, using the traditional SVC HMC or the SSPC console. Multiple master consoles or SSPC consoles can access a single cluster, but when multiple master consoles access one cluster, you cannot concurrently perform configuration and service tasks. The master console can be supplied either as pre-installed hardware or as software supplied to, and subsequently installed by, the user. With SVC 5.1 the cluster configuration node can now be accessed on both Ethernet ports, which means the cluster can have two IPv4 and two IPv6 addresses that are used for configuration purposes. Figure 3-7 on page 73 shows the IP configuration possibilities.
The cluster can therefore be managed by SSPCs on separate networks, which provides redundancy in the event of a failure of one of these networks. Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each Ethernet port on every node; these IP addresses are independent of the cluster configuration IP addresses. The CLI commands for managing the cluster IP addresses have therefore been moved from svctask chcluster to svctask chclusterip in SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses. When connecting to the SVC with SSH, choose one of the available IP addresses to connect to. There is no automatic failover capability, so if one network is down, use the other IP address. Customers may be able to use intelligence in Domain Name Servers (DNS) to provide some failover. When using the GUI, customers can add the cluster to the SVC Console multiple times (once per IP address). Failover is achieved by using the functional IP address when launching the SVC Console interface.
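Because there is no automatic failover between the two configuration addresses, a management client simply has to try each configured address in turn. The sketch below illustrates that logic; the addresses and the fake_connect stub are hypothetical stand-ins for a real SSH connection, not SVC code.

```python
def connect_to_cluster(addresses, try_connect):
    """Try each configured cluster IP address in turn; there is no
    automatic failover, so the client itself walks the list."""
    last_error = None
    for addr in addresses:
        try:
            return try_connect(addr)      # e.g. open an SSH session
        except OSError as exc:
            last_error = exc              # network down: try the next one
    raise ConnectionError(f"no cluster IP reachable: {last_error}")

# Hypothetical stub standing in for a real SSH connect call:
def fake_connect(addr):
    if addr == "10.0.0.10":
        raise OSError("network down")
    return f"connected to {addr}"

session = connect_to_cluster(["10.0.0.10", "10.1.0.10"], fake_connect)
```

The same pattern underlies the GUI approach described above: registering the cluster once per IP address and launching against whichever address is currently functional.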
Figure 3-9 shows an example of SVC, host and Storage Subsystem connections.
The following guidelines must also be applied:
Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are assigned to the SVC. All data transfer happens through the SVC nodes. Under some circumstances, a disk subsystem can present LUNs to both the SVC (as managed disks, which it then virtualizes to hosts) and to other hosts in the SAN.
Mixed speeds are permitted within the fabric, but not for intracluster communication. You can use lower speeds to extend the distance.
Uniform SVC port speed for 2145-4F2 and 2145-8F2 nodes: the optical fiber connections between Fibre Channel switches and all 2145-4F2 or 2145-8F2 SVC nodes in a cluster must run at one speed, either 1 Gbps or 2 Gbps. 2145-4F2 or 2145-8F2 nodes with different speeds running on the node-to-switch connections in a single cluster is an unsupported configuration (and is impossible to configure anyway). This rule does not apply to 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8 nodes, because the Fibre Channel ports on these nodes auto-negotiate their speed independently of one another and can run at 2 Gbps, 4 Gbps, or 8 Gbps.
Each local or remote fabric should not contain more than three ISL hops within the fabric. Operation with more ISLs is unsupported. When a local and a remote fabric are connected together for remote copy purposes, there should be only one ISL hop between the two SVC clusters. This means that some ISLs can be used in a cascaded switch link between the local and remote clusters, provided that the local and remote cluster internal ISL count is less than three. This gives a maximum of seven ISL hops in an SVC environment with both local and remote fabrics.
The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules. This can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN.
Operation outside the switch manufacturer's rules is not supported. The SAN must contain only supported switches; operation with other switches is unsupported.
Host HBAs in dissimilar hosts, or dissimilar HBAs in the same host, need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. Here, dissimilar means that the hosts are running different operating systems or use different hardware platforms; different levels of the same operating system are therefore regarded as similar. This is a SAN interoperability issue rather than an SVC requirement. We recommend that the host zones contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance you want from your configuration.
Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a report that displays the connectivity between nodes and other controllers and hosts. This is particularly helpful in diagnosing SAN problems.
Zoning examples
Figure on page 77 shows an SVC cluster zoning example.
The equivalent configuration can be set up with IPv6 addresses only. Figure 3-14 on page 79 shows the use of IPv4 management and iSCSI addresses in two different subnets.
Figure 3-16 on page 80 shows the use of a redundant network and a third subnet for management.
Figure 3-17 shows the use of a redundant network for both iSCSI data and management.
Be aware that:
All the examples are valid using IPv4 and IPv6 addresses.
It is valid to use IPv4 addresses on one port and IPv6 addresses on the other.
It is valid to have different subnet configurations for IPv4 and IPv6 addresses.
different set of ports on the same controller becomes degraded. This can occur if inappropriate zoning was applied to the fabric or if inappropriate LUN masking is used. This has important implications for a disk subsystem, such as the DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA worldwide names (WWNs) a storage partition can be mapped to. In general, configure disk subsystems as you would without the SVC, but the following specific guidelines are suggested:
Disk drives:
Be careful with large disk drives so that you do not end up with too few spindles to handle the load.
RAID-5 is suggested, but RAID-10 is viable and useful.
Array sizes:
8+P or 4+P is recommended for the DS4K/5K family, if possible.
Use a DS4K segment size of 128 KB or larger to help sequential performance.
Avoid SATA disks unless running SVC 4.2.1.x or later.
Upgrade to EXP810 drawers, if possible.
Create LUN sizes equal to the RAID array/rank if they do not exceed 2 TB.
Create a minimum of one LUN per Fibre Channel port on a disk controller zoned with the SVC.
When adding more disks to a subsystem, consider adding the new MDisks to existing MDGs rather than creating additional small MDGs.
Use a Perl script to re-stripe VDisk extents evenly across all MDisks in an MDG.
Go to http://www.ibm.com/alphaworks and search using svctools.
Maximum of 64 WWNNs:
EMC DMX/SYMM, all HDS, and Sun/HP HDS clones use one WWNN per port; each port appears as a separate controller to the SVC. Upgrade to SVC 4.2.1 or later so you can map LUNs through up to 16 FC ports; this results in 16 WWNNs/WWPNs used out of the maximum of 64.
IBM, EMC CLARiiON, and HP use one WWNN per subsystem; each appears as a single controller with multiple ports/WWPNs, with a maximum of 16 ports/WWPNs per WWNN, using one out of the maximum of 64.
DS8K using four or eight 4-port HA cards:
Use ports 1 and 3 or ports 2 and 4 on each card. This provides 8 or 16 ports for SVC use.
Use a minimum of 8 ports for up to 40 ranks.
Use 16 ports, the maximum, for more than 40 ranks.
Upgrade to SVC 4.2.1.9 or later to drive more workload to the DS8K; this release increased the queue depth for the DS4K/5K, DS6K, DS8K, and EMC DMX.
DS4K/5K and EMC CLARiiON/CX:
Both have the preferred controller architecture, and the SVC honors this configuration.
Use a minimum of 4, and preferably 8 or more, ports, up to the maximum of 16; more ports equate to more concurrent I/O driven by the SVC.
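The even re-striping of VDisk extents across an MDG's MDisks mentioned above can be pictured as simple round-robin placement. This is an illustrative model only, not the actual alphaWorks script; the MDisk names are invented.

```python
def restripe_extents(num_extents, mdisks):
    """Assign a VDisk's extents round-robin across the MDisks of an MDG,
    so the extent counts end up as evenly spread as possible."""
    placement = {m: 0 for m in mdisks}
    for i in range(num_extents):
        placement[mdisks[i % len(mdisks)]] += 1
    return placement

# 10 extents over 3 MDisks -> counts of 4, 3 and 3
layout = restripe_extents(10, ["md0", "md1", "md2"])
```

Spreading extents this way means every MDisk (and therefore every back-end port) carries a near-equal share of the VDisk's I/O, which is the goal of the re-striping exercise.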
Support for mapping controller A ports to fabric A and controller B ports to fabric B, or cross-connecting ports to both fabrics from both controllers. The latter is preferred, to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
Upgrade to SVC 4.3.1 or later for an SVC queue depth change for CX models, because it drives more I/O per port per MDisk.
DS3400:
Use a minimum of 4 ports.
Upgrade to SVC 4.3.x or later for better resiliency if the DS3400 controllers reset.
XIV requirements and restrictions:
The SVC cluster must be running version 4.3.0.1 or later to support the XIV.
The use of some XIV functions on LUNs presented to the SVC is not supported: you cannot do snapshots, thin provisioning, synchronous replication, or LUN expansion on XIV MDisks.
A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.
Full 15-module XIV recommendations (79 TB usable):
Use two interface host ports from each of the 6 interface modules.
Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC node ports.
Create 48 LUNs of equal size, each a multiple of 17 GB; you will get approximately 1632 GB each if using the entire full-frame XIV with the SVC.
Map the LUNs to the SVC as 48 MDisks and add all of them to the one XIV MDG, so the SVC will drive the I/O to 4 MDisks/LUNs for each of the 12 XIV Fibre Channel ports. This provides a good queue depth on the SVC to drive the XIV adequately.
Six-module XIV recommendations (27 TB usable):
Use two interface host ports from each of the 2 active interface modules.
Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive) and zone these 4 ports with all SVC node ports.
Create 16 LUNs of equal size, each a multiple of 17 GB; you will get approximately 1632 GB each if using the entire XIV with the SVC.
Map the LUNs to the SVC as 16 MDisks and add all of them to the one XIV MDG, so the SVC will drive I/O to 4 MDisks/LUNs for each of the 4 XIV Fibre Channel ports.
This provides a good queue depth on the SVC to drive the XIV adequately.

Nine-module XIV recommendations (43 TB usable):
- Use two interface host ports from each of the four active interface modules.
- Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive) and zone these 8 ports with all SVC node ports.
- Create 26 LUNs of equal size, each a multiple of 17 GB; each will be approximately 1632 GB if you use the entire XIV with the SVC.
- Map the LUNs to the SVC as 26 MDisks and add all of them to the one XIV MDG, so the SVC drives I/O to 3 MDisks/LUNs on each of 6 ports and 4 MDisks/LUNs on the other 2 XIV Fibre Channel ports. This provides a good queue depth on the SVC to drive the XIV adequately.

Configuring XIV host connectivity for the SVC cluster:
- Create one host definition on the XIV and include all SVC node WWPNs.
- You can create clustered host definitions (one per I/O group), but the preceding approach is easier.
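As a sanity check on the LUN sizing above, the largest equal LUN size that is a multiple of 17 GB can be computed from the usable capacity and the LUN count. This is an illustrative sketch only; the usable-capacity figures in the comments are assumptions that correspond to the nominal 79/43/27 TB configurations (48, 26, and 16 LUNs of 1632 GB):

```python
def xiv_lun_size_gb(usable_gb, lun_count, multiple_gb=17):
    """Largest equal LUN size (a multiple of 17 GB) that fits
    lun_count LUNs into the usable capacity."""
    return (usable_gb // lun_count // multiple_gb) * multiple_gb

# Assumed usable capacities matching the nominal 79/43/27 TB figures:
print(xiv_lun_size_gb(78336, 48))  # full frame (15 modules): 1632
print(xiv_lun_size_gb(42432, 26))  # nine modules: 1632
print(xiv_lun_size_gb(26112, 16))  # six modules: 1632
```

In each configuration the result is 1632 GB, which is 96 x 17 GB, matching the "approximately 1632 GB" figure in the recommendations.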
Chapter 3. Planning and configuration
Maximum cluster capacity is related to the extent size: a 16 MB extent gives a 64 TB maximum, and the maximum doubles for each increment in extent size (for example, 32 MB = 128 TB). We strongly recommend a minimum extent size of 128 MB or 256 MB; the SPC benchmarks used a 256 MB extent. Pick one extent size and use it for all MDGs, because you cannot migrate VDisks between MDGs with different extent sizes.

MDG reliability, availability, and serviceability (RAS) considerations:
- It may make sense to create multiple MDGs if you can ensure that a host only gets its VDisks built from one of the MDGs. If that MDG goes offline, it then impacts only a subset of the hosts using the SVC, but this approach can lead to a high number of MDGs, close to the SVC limits. If you are not going to isolate hosts to MDGs, create one large MDG; this assumes the physical disks are all the same size, speed, and RAID level.
- An MDG goes offline if an MDisk is unavailable, even if that MDisk has no data on it. Do not put MDisks into an MDG until they are needed.
- Create at least one separate MDG for all the image mode virtual disks.
- Make sure that the LUNs given to the SVC have any host persistent reserves removed.

MDG performance considerations:
- It may make sense to create multiple MDGs if you are attempting to isolate workloads to different disk spindles.
- MDGs with too few MDisks cause MDisk overload, so it is better to have a higher spindle count in an MDG to meet workload requirements.

MDG and SVC cache relationship
SVC 4.2.1 first introduced cache partitioning to the SVC code base. The decision was made to provide flexible partitioning rather than hard-coding a specific number of partitions. This flexibility is provided on a Managed Disk Group (MDG) boundary; that is, the cache automatically partitions the available resources on a per-MDG basis. Most users create a single Managed Disk Group from the LUNs provided by a single disk controller, or from a subset of a controller or a collection of similar controllers, based on the characteristics of the LUNs themselves: for example, RAID-5 versus RAID-10, 10K RPM versus 15K RPM, and so on. The overall strategy is to protect against individual controller overloading or faults. If many controllers (or, in this case, Managed Disk Groups) are overloaded, the overloaded ones can still suffer. Table 3-2 shows the limit of the write cache data.
Table 3-2  Limit of the write cache data

  Number of MDGs    Upper limit
  1                 100%
  2                 66%
  3                 40%
  4                 30%
  5 or more         25%
You can think of the rule as being that no single partition can occupy more than its upper limit of cache capacity with write data. These are upper limits, and they are the point at which the SVC cache starts to limit incoming I/O rates for Virtual Disks (VDisks) created from the Managed Disk Group. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full: host writes are serviced on a one-out-one-in basis as the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited; all I/O destined for other (non-limited) Managed Disk Groups continues as normal. Read I/O requests for the limited partition also continue as normal. However, because the SVC is destaging write data at a rate that is greater than the controller can actually sustain (otherwise, the partition would not have reached the upper limit), reads are likely to be serviced equally slowly.
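The partition limits in Table 3-2 can be expressed as a simple lookup. This is an illustrative sketch of the rule, not SVC code:

```python
def write_cache_upper_limit(num_mdgs):
    """Percentage of the write cache that a single MDG partition may
    occupy with write data, per Table 3-2."""
    limits = {1: 100, 2: 66, 3: 40, 4: 30}
    return limits.get(num_mdgs, 25)  # 5 or more MDGs -> 25%

print(write_cache_upper_limit(2))  # 66
print(write_cache_upper_limit(8))  # 25
```

Note that with two MDGs the limits sum to more than 100%; they are per-partition ceilings, not reservations.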
Even if you have eight paths for each virtual disk, all I/O traffic flows toward only one node (the preferred node). Therefore, only four paths are really used by SDD. The other four are used only in case of a failure of the preferred node or when a concurrent code upgrade (CCU) is running.

Creating image mode virtual disks
Use image mode virtual disks when a managed disk already has data on it from a non-virtualized disk subsystem. When an image mode virtual disk is created, it directly corresponds to the managed disk from which it is created; therefore, virtual disk LBA x equals managed disk LBA x. The capacity of an image mode VDisk defaults to the capacity of the supplied MDisk. When you create an image mode disk, the managed disk must have a mode of unmanaged and therefore does not belong to any MDG. A capacity of 0 is not allowed. Image mode virtual disks can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.

Creating managed mode virtual disks with sequential or striped policy
When creating a managed mode virtual disk with a sequential or striped policy, you must use managed disks that together contain enough free extents to satisfy the size of the virtual disk you want to create. It may be the case that sufficient extents are available on the managed disks, but that there is no contiguous block large enough to satisfy the request.

Space-efficient virtual disk considerations
When creating a space-efficient volume, it is necessary to understand the utilization patterns of the applications or group of users accessing this volume. Items such as the actual size of the data, the rate of creation of new data, and the modification or deletion of existing data all need to be taken into consideration.
There are two operating modes for Space-Efficient VDisks:
- Auto-expand VDisks allocate storage from a managed disk group on demand, with minimal user intervention required. However, a misbehaving application can cause a VDisk to expand until it has consumed all the storage in the managed disk group.
- Non-auto-expand VDisks are assigned a fixed amount of real storage. In this case, the user must monitor the VDisk and assign additional capacity if and when required, but a misbehaving application can only cause the VDisk that it is using to fill up.

Depending on the initial size for the real capacity, the grain size and warning level can be set. If a disk goes offline, either through a lack of available physical storage on auto-expand, or because a disk marked as non-expand has not been expanded, there is a danger of data being left in the cache until some storage is made available. This is not a data integrity or data loss issue, but you should not rely on the SVC cache as a backup storage mechanism.

Recommendations:
- We highly recommend keeping a warning level on the used capacity so that it provides adequate time to provision more physical storage. Warnings must not be ignored by an administrator.
- Use the auto-expand feature of Space-Efficient VDisks.
- The grain size allocation unit for the real capacity in the VDisk can be set to 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size uses space more effectively, but it results in a larger directory map, which may reduce performance.
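To illustrate the grain-size trade-off, the number of grains (and therefore directory entries) needed to map a given real capacity can be estimated as follows. The actual directory format is internal to the SVC; this sketch only shows the eightfold difference in entry count between a 32 KB and a 256 KB grain:

```python
def directory_entries(real_capacity_gib, grain_kib):
    """Number of grains needed to map the real capacity:
    capacity (in KiB) divided by the grain size (in KiB)."""
    return (real_capacity_gib * 1024 * 1024) // grain_kib

# 100 GiB of real capacity:
print(directory_entries(100, 32))   # 3276800 grains at a 32 KB grain size
print(directory_entries(100, 256))  # 409600 grains at a 256 KB grain size
```

The 32 KB grain needs eight times as many directory entries as the 256 KB grain for the same capacity, which is why a smaller grain saves space but may cost performance.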
Space-Efficient VDisks require more I/Os because of directory accesses. For truly random workloads with 70% reads and 30% writes, a Space-Efficient VDisk requires approximately one directory I/O for every user I/O, so performance could be up to 50% lower than that of a normal VDisk. The directory is two-way write-back cached (just like the SVC fast-write cache), so some applications will perform better. Space-Efficient VDisks also require more CPU processing, so the performance per I/O group will be lower. Starting with SVC 5.1, Space-Efficient VDisks support zero detect. This feature enables customers to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) using VDisk Mirroring.

VDisk Mirroring
If you are planning to use the VDisk Mirroring option, apply the following guidelines:
- Create or identify two different MDGs in which to allocate space for your mirrored VDisk.
- If possible, use MDGs with MDisks that share the same characteristics; otherwise, the VDisk performance may be affected by the lower-performance MDisk.
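The "up to 50% of a normal VDisk" figure follows directly from one directory I/O per user I/O: the back end must service roughly two I/Os for each user I/O. A minimal sketch of that arithmetic (ignoring the directory cache, which improves real-world results):

```python
def sev_effective_iops(backend_iops, directory_ios_per_user_io=1.0):
    """Each user I/O also costs directory I/O on the back end, so the
    user-visible rate is backend_iops / (1 + directory overhead)."""
    return backend_iops / (1 + directory_ios_per_user_io)

print(sev_effective_iops(10000))  # 5000.0 -> up to 50% of a normal VDisk
```

With the directory overhead set to zero (a fully allocated VDisk), the full back-end rate is available.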
You can use the port mask:
- As part of a security policy, to limit the set of WWPNs that can access any VDisks through a given SVC port.
- As part of a scheme to limit the number of logins with mapped VDisks visible to a host multipathing driver (such as SDD), and thus limit the number of host objects configured, without resorting to switch zoning.

The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).

The SVC supports connection to the Cisco MDS family and the Brocade family. See the following Web site for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
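The port mask semantics can be sketched as follows, assuming (as the 0011 example implies) that the rightmost bit of the mask corresponds to port 1:

```python
def enabled_ports(mask):
    """Decode a 4-bit port mask string; the rightmost bit is port 1."""
    return [i + 1 for i, bit in enumerate(reversed(mask)) if bit == "1"]

print(enabled_ports("0011"))  # [1, 2]  - the example above
print(enabled_ports("1111"))  # [1, 2, 3, 4]  - the default
print(enabled_ports("0000"))  # []  - no ports enabled
```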
Figure 3-19 shows two redundant fabrics. Part of each fabric exists at the local cluster and at the remote cluster; there is no direct connection between the two fabrics. Technologies for extending the distance between two SVC clusters can be broadly divided into two categories:
- Fibre Channel extenders
- SAN multiprotocol routers
Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support Web site at:
http://www.ibm.com/storage/support/2145
IBM has tested a number of Fibre Channel extenders and SAN router technologies with the SVC. These must be planned, installed, and tested so that the following requirements are met:
- For SVC 4.1.0.x, the round-trip latency between sites must not exceed 68 ms (34 ms one way) for Fibre Channel extenders, or 20 ms (10 ms one way) for SAN routers.
- For SVC 4.1.1.x and later, the round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global Mirror, this allows a distance between the primary and secondary sites of up to 8000 km, using a planning assumption of 100 km per 1 ms of round-trip link latency.
- The latency of long-distance links depends on the technology used to implement them. A point-to-point dark-fiber-based link typically provides a round-trip latency of 1 ms per 100 km or better; other technologies provide longer round-trip latencies, which reduces the maximum supported distance.
- The configuration must be tested with the expected peak workloads.

When Metro Mirror or Global Mirror is being used, a certain amount of bandwidth is required for SVC inter-cluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters. Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, generated by different sizes of cluster.
These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored VDisks. Half of the data is sent by one cluster, and half by the other. The traffic is divided evenly over all available inter-cluster links; therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.

The bandwidth between sites must, at the very least, be sized to meet the peak workload requirements while maintaining the maximum latency specified above. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O to disks in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the heartbeat bandwidth indicated above, but the true bandwidth required for the link can only be determined by considering the peak write bandwidth to Virtual Disks participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth. If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements continue to hold true even during such single-failure conditions.
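The planning assumption of 100 km per 1 ms of round-trip latency translates a latency budget directly into a supported distance. A minimal sketch:

```python
def max_distance_km(rtt_budget_ms, rtt_ms_per_100km=1.0):
    """Planning distance supported by a round-trip latency budget,
    assuming dark-fiber-class propagation (1 ms RTT per 100 km)."""
    return rtt_budget_ms / rtt_ms_per_100km * 100

print(max_distance_km(80))  # 8000.0 km for SVC 4.1.1.x and later
print(max_distance_km(68))  # 6800.0 km for 4.1.0.x FC extenders
```

Technologies with slower effective propagation (a larger `rtt_ms_per_100km`) shrink the supported distance proportionally.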
SAN Volume Controller V5.1
The configuration must be tested to simulate failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary. The configuration must also be tested to confirm that any failover mechanisms in the inter-cluster links interoperate satisfactorily with the SVC. The Fibre Channel extender must be treated as a normal link. The bandwidth and latency measurements must be made by, or on behalf of, the client, and they are not part of the standard installation of the SVC by IBM. IBM recommends that these measurements be made during installation and that records be kept. Testing must be repeated following any significant changes to the equipment providing the inter-cluster link.
Global Mirror VDisks should have their preferred nodes evenly distributed between the nodes of the clusters. Each VDisk within an I/O group has a preferred node property that can be used to balance the I/O load between nodes in that group. This property is also used by Global Mirror to route I/O between clusters. Figure 3-21 shows the correct relationship between VDisks in a Metro Mirror or Global Mirror solution.
The capabilities of the storage controllers at the secondary cluster must be provisioned to allow for the peak application workload to the Global Mirror VDisks, plus the customer-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster, so provision the secondary controllers to maximize the amount of I/O that applications can make to Global Mirror VDisks.

- We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks without a complete review. Be careful using a slower disk subsystem for the secondary VDisks of high-performance primary VDisks; the SVC cache may not be able to buffer all the writes, and flushing cache writes to SATA may slow I/O at the production site.
- Global Mirror VDisks at the secondary cluster should be in dedicated MDisk groups (which contain no non-Global Mirror VDisks).
- Storage controllers should be configured to support the Global Mirror workload that is required of them. This might be achieved by dedicating storage controllers to only Global Mirror VDisks, by configuring the controller to guarantee sufficient quality of service for the disks being used by Global Mirror, or by ensuring that physical disks are not shared between Global Mirror VDisks and other I/O (for example, by not splitting an individual RAID array).
- MDisks within a Global Mirror MDisk group should be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This is true of all MDisk groups, but it is particularly important for maintaining performance when using Global Mirror.

When a consistent relationship is stopped, for example by a persistent I/O error on the inter-cluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site.
Restarting the relationship will begin the process of synchronizing new data to the secondary disk. While this is in progress, the relationship will be in the inconsistent_copying state. This means that the Global Mirror secondary VDisk will not be in a usable state until the copy has completed and the relationship has returned to a consistent state. Therefore, it is highly advisable to create a FlashCopy of the secondary VDisk before restarting the relationship. Once started, the FlashCopy will provide a consistent copy of the data, even while the
Global Mirror relationship is copying. If the Global Mirror relationship does not reach the synchronized state (if, for example, the inter-cluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes. If you are planning to use an FCIP inter-cluster link, it is very important to design and size the pipe correctly. Example 3-2 on page 95 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example
Amount of write data within 24 hours, times 4 to allow for peaks
Translate into MB/s to determine the WAN link needed
Example: 250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 sec/hr = 86400 sec
1,000,000,000,000 / 86400 = approximately 12 MB/s
An OC3 or higher (155 Mbps or higher) is needed

If compression is available on routers or WAN communication devices, smaller pipelines may be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, consider suspending Global Mirror during those time frames. If the network bandwidth is too small to handle the traffic, application write I/O response times may be elongated. For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Keep in mind that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000. You will also need to consider the initial synchronization and resynchronization workloads. The Global Mirror partnership's background copy rate must be set to a value appropriate to the link and to the secondary back-end storage. Keep in mind that the more bandwidth you give to the sync and resync operations, the less capacity the SVC can deliver for regular data traffic. The Metro Mirror or Global Mirror background copy rate is predefined: the per-VDisk limit is 25 MBps, and the maximum per I/O group is roughly 250 MBps. Be careful when using Space-Efficient secondary VDisks at the DR site, because a Space-Efficient VDisk could have performance up to 50% lower than a normal VDisk and could impact the performance of the VDisks at the primary site. Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80-120 ms. Greater than 80 ms requires a SCORE/RPQ submission.
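The arithmetic in Example 3-2 can be reproduced with a short sketch (decimal units, as in the example):

```python
def wan_link_requirement(write_gb_per_day, peak_factor=4):
    """Best-guess WAN sizing: daily write volume times a peak factor,
    spread over 86400 seconds; returns (MB/s, Mbps)."""
    bytes_per_sec = write_gb_per_day * peak_factor * 1_000_000_000 / 86400
    mb_per_sec = bytes_per_sec / 1_000_000
    return mb_per_sec, mb_per_sec * 8

mbs, mbps = wan_link_requirement(250)
print(round(mbs), "MB/s,", round(mbps), "Mbps")
# ~12 MB/s (~93 Mbps) -> an OC3 (155 Mbps) or higher link is needed
```

Remember that this averages the load over the whole day; the link must still handle the short-term peak write bandwidth, which can be much higher.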
You can back up the configuration by using the configuration backup command; this is described for the CLI in Chapter 7, "SVC operations using the CLI" on page 337, and for the GUI in Chapter 8, "SVC operations using the GUI" on page 469.
3.4.1 SAN
The SVC is now available in different models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switches brings security and performance together. We recommend implementing a dual-HBA approach at the host to access the SVC.
Creating one LUN per array helps in a sequential workload environment. In most cases, the SVC is able to improve performance, especially on low-to-midrange disk subsystems, older disk subsystems with slow controllers, or uncached disk systems. This improvement happens for these reasons:
- The SVC can stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.
- The SVC has a 4 GB, 8 GB, or (in the latest 2145-CF8 model) 24 GB cache, with an advanced caching mechanism.

The SVC's large cache and advanced cache management algorithms also allow it to improve on the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (while still supporting full data integrity) has the potential to be particularly important in achieving good database performance. Depending on the size, age, and technology level of the disk storage system, the total cache available in the SVC may be larger than, smaller than, or about the same as that of the disk storage. Because hits can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage controller level of cache has the greater capacity, hits to this cache can be expected in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases. Its ability to do so depends on both the underlying storage technology and the degree to which the workload exhibits hot spots or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, describes the SVC's cache partitioning capability: http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SVC
The SVC cluster is scalable up to eight nodes, and performance scales almost linearly as you add nodes to an SVC cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the need for a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, thereby creating a greater level of concurrent I/O to the back end without overloading a single disk or array. Assuming that there are no bottlenecks in the SAN or on the disk subsystem, keep in mind that specific guidelines must be followed when you are:
- Creating Managed Disk Groups
- Creating Virtual Disks
- Connecting or configuring hosts that must receive disk space from an SVC cluster

More detailed information about performance and best practices for the SVC can be found in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Chapter 4.
You still have full management of the SVC regardless of the method you choose. The SSPC is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, you may be using the SVC console (HMC). You can still use it together with the SSPC, with the proviso that you can only log in to your SVC from one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC console or the SSPC, because the SVC CLI is based on the PuTTY interface and can be installed anywhere.
For more information about TCP/IP prerequisites, see Chapter 3, "Planning and configuration" on page 63, and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To start an SVC initial configuration, follow the common flowchart shown in Figure 4-3, which covers all of the management methods discussed.
In the next sections we describe each of the steps shown in Figure 4-3.
This replaces the functionality of the SVC Master Console (MC), which was a dedicated management console for the SVC. The Master Console is still supported and runs the latest code levels of the SVC Console software components. The SSPC has all the software components pre-installed and tested on a System x machine, model 2805-MC4, with Windows installed on it. All the software components installed on the SSPC can also be ordered and installed on hardware that meets or exceeds the minimum requirements; the SVC Console software components are also available on the Web. When using the SSPC with the SAN Volume Controller, you must install and configure the SSPC before configuring the SAN Volume Controller. For a detailed guide to the SSPC, we recommend that you refer to the IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, "Planning and configuration" on page 63.
One CD/DVD bay with read and write capability
Microsoft Windows 2008 Enterprise Edition
The SSPC is designed to perform basic SSPC functions. If you plan to use the SSPC for more functions, you can purchase the Performance Upgrade Kit to add more capacity to your hardware.
Figure 4-6 shows a rear view of SSPC Console based on the 2805-MC4 hardware.
Microsoft Windows Internet Explorer Version 7.0 Antivirus software (not required but strongly recommended) PuTTY Version 0.60 (if not installed) You can obtain the latest copy of PuTTY by going to the following Web site and downloading the Windows installer in the Binaries section: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Consideration: If you want to use IPv6, then you must be running Windows 2003 Server. SVC ICAT Application For a complete and current list of the supported software levels for the SVC Console, refer to the SVC Support page at: http://www.ibm.com/storage/support/2145
Determine the cabling required. Determine the network IP address. Determine the HMC host name. For detailed installation guidance, refer to the IBM System Storage SAN Volume Controller: Master Console Guide, SC27-2223 at the following link: http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&d c=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en
4.4.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions needed to create the cluster for the first time using the service panel. For the buttons to push in the steps that follow, use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node models, Figure 4-8 for the SVC 2145-8G4 and 2145-8A4 node models, and Figure 4-9 for the SVC 2145-CF8 node.
Figure 4-7 SVC 8F2 Node and SVC 8F4 Node front and operator panel
4.4.2 Prerequisites
Ensure that the SVC nodes are physically installed. Before configuring the cluster, ensure that the following information is available:
- License: The license indicates whether the customer is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the customer is licensed to virtualize.
- For IPv4 addressing:
  - Cluster IPv4 addresses: one for the cluster and another for the service address
  - IPv4 subnet mask
  - IPv4 gateway address
- For IPv6 addressing:
  - Cluster IPv6 addresses: one for the cluster and another for the service address
  - IPv6 prefix
  - IPv6 gateway address
1. Choose any node that is to become a member of the cluster being created.
2. At the service panel of that node, click and release the Up or Down navigation button repeatedly until Node: is displayed.
   Important: If a timeout occurs when entering the input for the fields during these steps, you must begin again from step 2. All the changes are lost, so be sure to have all the information on hand before beginning.
3. Click and release the Left or Right navigation button repeatedly until Create Cluster? is displayed. Click the Select button.
4. If IPv4 Address: is displayed on line 1 of the service display, go to step 5. If Delete Cluster? is displayed on line 1 of the service display, this node is already a member of a cluster. Either the wrong node was selected, or this node was already used in a previous cluster. The ID of the existing cluster is displayed on line 2 of the service display.
   a. If the wrong node was selected, exit this procedure by clicking the Left, Right, Up, or Down button (it cancels automatically after 60 seconds).
   b. If it is certain that the existing cluster is not required, follow these steps:
      i. Click and hold the Up button.
      ii. Click and release the Select button, then release the Up button. This deletes the cluster information from the node. Go back to step 1 and start again.
   Important: When a cluster is deleted, all client data contained in that cluster is lost.
5. If you are creating the cluster with IPv4, click the Select button; otherwise, for IPv6, press the Down button to display IPv6 Address: and then click the Select button.
6. Use the Up or Down navigation button to change the value of the first field of the IP address to the value that has been chosen.
   Note: For IPv4, pressing and holding the Up or Down button increments or decreases the IP address field by units of 10. The field value rotates from 0 to 255 with the Down button, and from 255 to 0 with the Up button.
   For IPv6, you do the same, except that each field is a 4-digit hexadecimal field and the individual characters increment.
7. Use the Right navigation button to move to the next field. Use the Up or Down navigation buttons to change the value of this field.
8. Repeat step 7 for each of the remaining fields of the IP address.
9. When the last field of the IP address has been changed, click the Select button.
10. Click the Right button.
    a. For IPv4, IPv4 Subnet: is displayed.
    b. For IPv6, IPv6 Prefix: is displayed.
11. Click the Select button.
12. Change the fields for the IPv4 subnet in the same way that the IPv4 address fields were changed. There is only a single field for the IPv6 prefix.
13. When the last field of the IPv4 subnet or IPv6 prefix has been changed, click the Select button.
14. Click the Right navigation button.
    a. For IPv4, IPv4 Gateway: is displayed.
    b. For IPv6, IPv6 Gateway: is displayed.
15. Click the Select button.
16. Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address fields were changed.
17. When changes to all gateway fields have been made, click the Select button.
18. Click the Right navigation button.
    a. For IPv4, IPv4 Create Now? is displayed.
    b. For IPv6, IPv6 Create Now? is displayed.
19. When the settings have all been verified as accurate, click the Select button. To review the settings before creating the cluster, use the Right and Left buttons; make any necessary changes, return to Create Now?, and click the Select button.
    If the cluster is created successfully, Password: is displayed on line 1 of the service display panel. Line 2 contains a randomly generated password, which is used to complete the cluster configuration in the next section.
    Important: Make a note of this password now. It is case sensitive. The password is displayed for only approximately 60 seconds. If the password is not recorded, the cluster configuration procedure must be started again from the beginning.
20. When Cluster: is displayed on line 1 of the service display and the Password: display has timed out, the cluster was created successfully. The cluster IP address is also displayed on line 2 when the initial creation of the cluster is completed.
    If the cluster is not created, Create Failed: is displayed on line 1 of the service display. Line 2 contains an error code. Refer to the error codes documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to find the reason why the cluster creation failed and the corrective action to take.
    Important: At this time, do not repeat this procedure to add other nodes to the cluster. Adding nodes to the cluster is described in 7.8.2, "Adding a node" on page 388, and in 8.10.3, "Adding nodes to the cluster" on page 559.
114
1. Open the GUI using one of the following methods:
- Double-click the SAN Volume Controller Console icon on the SVC Console's desktop.
- Open a Web browser on the SVC Console and point it to this address: http://localhost:9080/ica (we accessed the SVC Console using this method).
- Open a Web browser on a separate workstation and point it to this address: http://svcconsoleipaddress:9080/ica
Figure 4-10 shows the SVC 5.1 Welcome screen.
2. Click the Add SAN Volume Controller Cluster button and you will be presented with the screen shown in Figure 4-11.
Important: Do not forget to select the Create Initialize Cluster field. Without this flag, you cannot initialize the cluster, and you receive error message CMMVC5753E. Figure 4-12 shows the CMMVC5753E error.
3. Click OK and a pop-up window appears and prompts for the user ID and password of the SVC cluster, as shown in Figure 4-13. Enter the user ID admin and the cluster admin password that was set earlier in 4.4.1, Creating the cluster (first time) using the service panel on page 109 and click OK.
4. The browser accesses the SVC and displays the Create New Cluster wizard window, as shown in Figure 4-14. Click Continue.
5. At the Create New Cluster page (Figure 4-15), fill in the following details:
- A new superuser password to replace the random one that the cluster generated. The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number, and it must be a minimum of one character and a maximum of 15 characters.
Note: The admin user that was used previously is no longer needed. It is replaced by the superuser user, which is created at cluster initialization time, because starting with SVC 5.1 the CIM agent has been moved inside the SVC cluster.
- A service password to access the cluster for service operations. The same rules apply: the password is case sensitive, can consist of A to Z, a to z, 0 to 9, and the underscore, cannot start with a number, and must be one to 15 characters long.
- A cluster name. The cluster name follows the same rules: it is case sensitive, can consist of A to Z, a to z, 0 to 9, and the underscore, cannot start with a number, and must be one to 15 characters long.
- A service IP address to access the cluster for service operations. Choose between an automatically assigned IP address from DHCP or a static IP address.
Note: The service IP address is different from the cluster IP address. However, because the service IP address is configured for the cluster, it must be on the same IP subnet.
- The fabric speed of the Fibre Channel network.
- The Administrator Password Policy check box. If selected, this option enables a user to reset the password from the service panel (helpful, for example, if the password is forgotten). This check box is optional.
Note: The SVC should be in a secure room if this function is enabled, because anyone who knows the correct key sequence can reset the admin password. The key sequence is as follows:
a. From the Cluster: menu item displayed on the service panel, click the Left or Right button until Recover Cluster? is displayed.
b. Click the Select button. Service Access? should be displayed.
c. Click and hold the Up button, and then click and release the Select button. This generates a new random password. Write it down.
Important: Be careful, because clicking and holding the Down button, and then clicking and releasing the Select button, places the node in service mode.
6. After you have filled in the details, click the Create New Cluster button (Figure 4-15).
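The character rules for the superuser password, service password, and cluster name are the same, so they can be condensed into one pattern check: a leading letter or underscore, followed by up to 14 more letters, digits, or underscores. The following sketch is our own illustration (not an SVC utility) of validating a candidate string against these rules before typing it into the wizard:

```shell
#!/bin/sh
# Validate a proposed SVC password or cluster name against the stated rules:
# characters A-Z, a-z, 0-9, and underscore; must not start with a digit;
# length 1 to 15 characters.
is_valid_svc_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]{0,14}$'
}

is_valid_svc_name "ITSO_CLS3" && echo "ITSO_CLS3: valid"
is_valid_svc_name "3cluster" || echo "3cluster: invalid (starts with a number)"
is_valid_svc_name "a_very_long_cluster_name" || echo "name too long: invalid"
```

The same check can be reused later for user IDs that follow these naming rules.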
Important: Make sure that you record the Administrator and Service passwords and retain them in a safe place for future use. 7. A Creating New Cluster window appears, as shown in Figure 4-16. Click Continue each time that you are prompted.
8. A Created New Cluster window appears, as shown in Figure 4-17. Click Continue.
9. A Password Changed window confirms that the password has been modified, as shown in Figure 4-18. Click Continue.
Note: By this time, the service panel display on the front of the configured node should display the cluster name entered previously (for example, ITSO-CLS3). 10. You are then redirected to the License Settings screen, as shown in Figure 4-19. Choose the type of license appropriate to your purchase and click Go to continue.
11. Next, the Featurization Settings window is displayed, as shown in Figure 4-20. To continue, at a minimum, the Virtualization Limit (Gigabytes) field must be filled out. If you are licensed for FlashCopy and Metro Mirror (the window reflects Remote Copy in this example), the Enabled radio buttons can also be selected here. Click the Set License Settings button.
12. A confirmation window shows the featurization settings that have been applied, as shown in Figure 4-21. Click Continue.
13.A window confirming that you have successfully created the initial settings for the cluster will appear as shown in Figure 4-22.
14. Closing the previous task screen by clicking the X in the upper right corner redirects you to the Viewing Clusters screen, where the cluster appears as unauthenticated. Select your cluster and click Go; you are asked to authenticate by entering the superuser user ID and password that you defined earlier. Figure 4-23 shows the Viewing Clusters screen.
15. To complete the SVC cluster configuration, you have to perform the following steps:
a. Add an additional node to the cluster.
b. Configure Secure Shell (SSH) keys for command-line users, as shown in 4.6, Secure Shell overview and CIM Agent on page 123.
c. Configure user authentication and authorization.
d. Set up Call Home options.
e. Set up event notifications and inventory reporting.
f. Create Managed Disk Groups (MDGs).
g. Add MDisks to MDGs.
h. Identify and create VDisks.
i. Create host objects and map VDisks to them.
j. Identify and configure FlashCopy mappings and Metro Mirror relationships.
k. Back up the configuration data.
All of these steps are described in Chapter 7, SVC operations using the CLI on page 337, and Chapter 8, SVC operations using the GUI on page 469.
command-line interface (CLI) or the CIMOM. The connection is secured by means of a private and public key pair:
- A public key and a private key are generated together as a pair.
- The public key is uploaded to the SSH server.
- The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
- The SSH server must also identify itself with a specific host key. If the client does not yet have that host key, it is added to a list of known hosts.
Secure Shell is the communication vehicle between the management system (usually the SSPC) and the SVC cluster. SSH is a client/server network application. The SVC cluster acts as the SSH server in this relationship. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication.
For more information about SSH, go to:
http://en.wikipedia.org/wiki/Secure_Shell
The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.
SSH keys are generated by the SSH client software. They include a public key, which is uploaded to and maintained by the cluster, and a private key, which is kept private on the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, or the SVC graphical user interface (GUI) prior to SVC 5.1, an SSH client must be installed on the client system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC cluster or clusters.
The SSPC and the HMC come with PuTTY, a freeware SSH-2 implementation for Windows, preinstalled. This software provides the SSH client function for users logged into the SVC Console who want to invoke the CLI or GUI to manage the SVC cluster.
Starting with SVC 5.1, the management design has changed, and the CIM agent has been moved into the SVC cluster. With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is required only for the SVC command-line interface. Figure 4-25 shows the SVC management design.
4.6.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system.
Note: These keys will be used in the step documented in 4.7, Using IPv6 on page 135.
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:
a. Select the SSH-2 RSA radio button.
b. Leave the number of bits in a generated key at 1024.
c. Click Generate.
3. Move the mouse pointer over the blank area to generate the keys.
Note: The blank area indicated by the message is the large blank rectangle inside the section of the GUI labelled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use as follows:
a. Click Save public key, as shown in Figure 4-27.
b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of them is kept, because the name and location of this SSH public key must be specified in the steps documented in 4.6.2, Uploading the SSH public key to the SVC cluster on page 128. Note: The PuTTY Key Generator saves the public key with no extension by default. We recommend that you use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key. c. In the PuTTY Key Generator window, click Save private key. d. You are prompted with a warning message, as shown in Figure 4-28. Click Yes to save the private key without a passphrase.
e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save. If you choose another name or location, ensure that you keep a record of it, because the name and location of the SSH private key must be specified when the PuTTY session is configured in the steps documented in 4.7, Using IPv6 on page 135.
Chapter 4. SVC initial configuration
We suggest that you use the default name icat.ppk, because SVC clusters running versions prior to SVC 5.1 used this key for icat application authentication, and it must have this default name.
Note: The PuTTY Key Generator saves the private key with the PPK extension.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program Files\IBM\svcconsole\cimom directory.
Important: If the private key was named something other than icat.ppk, make sure that you rename it to icat.ppk in the C:\Program Files\IBM\svcconsole\cimom folder. The GUI (which will be used later) expects the file to be called icat.ppk and to be in this location. This key is no longer used in SVC 5.1, but it is still valid for previous versions.
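If you prefer a command-line tool to the PuTTYgen GUI, an equivalent SSH-2 RSA key pair can be generated with OpenSSH's ssh-keygen. This is a hedged alternative sketch, not part of the documented procedure; if a .ppk file is needed, the resulting OpenSSH private key can be imported into PuTTYgen through Conversions → Import key.

```shell
#!/bin/sh
# Generate an SSH-2 RSA key pair with OpenSSH as an alternative to PuTTYgen.
# -N "" creates the key without a passphrase, matching the PuTTY procedure above.
rm -f ./svc_key ./svc_key.pub
ssh-keygen -q -t rsa -b 1024 -N "" -f ./svc_key

# svc_key is the private key; svc_key.pub is the public key to upload to the cluster.
ls -l ./svc_key ./svc_key.pub
```

The file names here are examples; any names can be used as long as a record of them is kept.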
2. On the Create a User screen, enter the user ID that you want to create and its password. At the bottom of the screen, select the access level that you want to assign to the user (bear in mind that Security Administrator is the maximum level) and browse to the SSH public key file that you created for this user, as shown in Figure 4-30. Click OK.
3. You have completed the user creation process and uploaded the user's SSH public key, which is paired later with the user's private .ppk key, as described in 4.6.3, Configuring the PuTTY session for the CLI on page 129. Figure 4-31 shows the successful upload of the SSH admin key.
4. The basic setup requirements for the SVC cluster using the SVC cluster Web interface have now been completed.
1. From the SSPC Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left, click Session, if it is not already selected.
Note: The items selected in the Category pane affect the content that appears in the right pane.
3. In the right pane, under the "Specify the destination you want to connect to" section, select the SSH radio button. Under the "Close window on exit" section, select the Only on clean exit radio button. This ensures that if there are any connection errors, they are displayed in the user's window.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-33.
5. In the right pane, in the "Preferred SSH protocol version" section, select radio button 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.
7. In the right pane, in the "Private key file for authentication:" field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK). See Figure 4-34.
8. From the Category pane on the left side of the PuTTY Configuration window, click Session. 9. In the right pane, follow these steps, as shown in Figure 4-35: a. Under the Load, save, or delete a stored session section, select Default Settings and click Save. b. For the Host Name (or IP address), type the IP address of the SVC cluster. c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session. d. Click Save.
The PuTTY Configuration window can now either be closed or left open to continue. Tip: Normally, output that comes from the SVC is wider than the default PuTTY window size. We recommend that you change your PuTTY window appearance to use a font with a character size of 8. To do this, click the Appearance item in the Category tree, as shown in Figure 4-35, and then click Font. Choose a font with character size of 8.
4. If this is the first time that the PuTTY application has been used since generating and uploading the SSH key pair, a PuTTY Security Alert window opens, because the cluster's SSH host key is not yet cached, as shown in Figure 4-37. Click Yes to cache the host key, which invokes the CLI.
5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key uploaded to the SVC cluster.
Example 4-1 Authenticating login as: admin Authenticating with public key "rsa-key-20080617" Last login: Wed Aug 18 03:30:21 2009 from 10.64.210.240 IBM_2145:ITSO-CL1:admin>
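The same key-based login can also be run non-interactively with plink, PuTTY's command-line connection tool, which is useful for scripting CLI commands from the SVC Console. This is an illustrative sketch; the cluster IP address is a placeholder for your own:

```
C:\Support Utils\PuTTY> plink -i icat.ppk admin@<cluster ip address> svcinfo lscluster
```

The -i option points plink at the same private key file that was configured for the interactive PuTTY session.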
You have now completed the tasks required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session. Continue with the next section to configure the GUI on the SVC Console.
Note: To remotely access the SVC Console and clusters running IPv6, you are required to run Internet Explorer 7 and have IPv6 configured on your local workstation.
Ethernet adapter IPv6:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . :
   Subnet Mask . . . . . . . . . . . :
   IP Address. . . . . . . . . . . . :
   IP Address. . . . . . . . . . . . :
   Default Gateway . . . . . . . . . :
2. In the IPv6 section (Figure 4-38):
a. Select an IPv6 interface and click Modify; the screen shown in Figure 4-39 is displayed.
b. Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
c. Type an IPv6 address in the Cluster IP field.
d. Type an IPv6 address in the Service IP address field.
e. Type an IPv6 gateway in the Gateway field.
f. Click the Modify Settings button.
3. A confirmation window is displayed (Figure 4-40). You can click the X in the top right-hand corner to close this tab.
4. Before you remove the cluster from the SVC Console, you should test IPv6 connectivity using the ping command from a cmd.exe session on the SSPC (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to SVC Cluster C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119
Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:
Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Ping statistics for 2001:610::119: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 3ms, Average = 0ms
5. In the Viewing Clusters pane, in the GUI Welcome window, select the radio button next to the cluster you want to remove. Select Remove a Cluster from the drop-down menu and click Go. 6. The Viewing Clusters window will re-appear, without the cluster you have removed. Select Add a Cluster from the drop-down menu and click OK (Figure 4-41).
7. The Adding a Cluster screen appears. Enter your IPv6 address, as shown in Figure 4-42, and click OK.
8. You are asked to enter your CIM user ID (superuser) and your password (default: passw0rd), as shown in Figure 4-43.
9. The Viewing Clusters window will re-appear with the cluster displaying an IPv6 address, as shown in Figure 4-44. Launch the SAN Volume Controller Console for the cluster and go back into Modify IP Address, as you did in step 1.
Figure 4-44 Viewing Clusters window - Displaying new cluster using IPv6 address
10. In the Modify IP Addresses window, select the IPv4 address port and select Clear Port Settings, as shown in Figure 4-45.
11.A confirmation window will display, as shown in Figure 4-46. Click OK.
12.A second window (Figure 4-47) will display, confirming the IPv4 stack has been disabled and the associated addresses have been removed. Click Return.
3. Execute the setup.exe file from the location where you have saved and unzipped the latest SVC Console file. Figure 4-48 shows the location of the setup.exe on our system.
4. The Installation wizard starts. The first window (as shown in Figure 4-49) asks you to:
- Shut down any running Windows programs
- Stop all SVC services
- Review the README file
Figure 4-49 shows how to stop SVC services.
Once you are ready, click Next.
6. The installation wizard asks you to accept the license agreement, as shown in Figure 4-51.
7. The installation should detect your existing SVC Console installation (if you are upgrading). If it does, it asks you to:
- Select Preserve Configuration if you want to keep your old configuration. (Make sure that this is checked.)
- Manually shut down the SVC Console services, namely:
  - IBM System Storage SAN Volume Controller Pegasus Server
  - Service Location Protocol
  - IBM WebSphere Application Server V6 - SVC
There may be differences in the existing services, depending on the version you are upgrading from. Follow the instructions on the dialog wizard for which services to shut down, as shown in Figure 4-52.
Important: If you want to keep your SVC configuration, then make sure you check the Preserve Configuration check box. If you omit this, you will lose your entire SVC Console setup, and you will have to reconfigure your console as though it were a fresh install. 8. The installation wizard will then check that the appropriate services are shut down, will remove the previous version and take you to the Installation Confirmation window shown in Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the possible problems, giving you time to fix them before proceeding.
9. The progress of the installation is shown in Figure 4-54. For our environment, it took approximately 10 minutes to complete.
10. The installation process now starts the migration of cluster user accounts. Starting with SVC 5.1, the CIMOM has been moved into the cluster; it is no longer present in the SVC Console or the SSPC. The CIMOM authentication login process is performed in the ICA application when the SVC management application is launched.
As part of the migration input, Figure 4-55 shows where to enter the admin password for each of the clusters that you originally owned. This password was generated during the initial creation of the SVC cluster and should have been saved safely.
11.At the end of the user accounts migration process you may get the error as shown in Figure 4-56.
This is normal behavior, because in our environment we have implemented just the superuser user ID. The GUI upgrade wizard works only for ordinary user accounts; it is not intended to migrate the superuser user. If you do get this error, when you try to access your SVC cluster using the GUI, you are required to enter the default CIMOM user ID (superuser) and password (passw0rd), because the superuser account has not been migrated and you have to use the default in the meantime.
12. When you click Next, the wizard either restarts all of the appropriate SVC Console processes or informs you that you need to reboot, and then gives you a summary of the installation. In this case, we were told that we need a reboot, as shown in Figure 4-57.
14. Finally, to see what the new interface looks like, launch the SVC Console by using the icon on the desktop. Log in and confirm that the upgrade was successful by
noting the Console Version number on the right hand side of the window (under the graphic). See Figure 4-59.
You have now completed the upgrade of your SVC Console. To access the SVC, click Clusters in the left pane; you are redirected to the Viewing Clusters screen, as shown in Figure 4-60.
As you can see, the cluster is Unauthenticated, which is to be expected. Select the cluster, click Go, and launch the SAN Volume Controller Application; you are required to enter your CIMOM user ID (superuser) and your password (passw0rd), as shown in Figure 4-61.
Finally, you can manage your SVC cluster as shown in Figure 4-62.
Chapter 5.
Host configuration
In this chapter, we describe the basic host configuration procedures required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).
in the fabric, which means that the server and the SVC node may be separated by up to five actual FC links, four of which can be 10 km long if longwave SFPs are used. For high-performance servers, the basic rule is to avoid ISL hops, that is, to connect them to the same switch as the SVC, if possible.
The following two limits have to be kept in mind when connecting host servers to an SVC:
- Up to 256 hosts per I/O Group are supported, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
- 512 distinct configured host WWPNs are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) associated with all of the hosts that are associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabrics is defined by the use of zoning. The basic rules for host zoning with the SVC follow:
- For configurations of fewer than 64 hosts per cluster, the SAN Volume Controller supports a simple set of zoning rules that enable a small set of host zones to be created for different environments. Switch zones containing host HBAs must contain no more than 40 initiators in total, including the SVC ports, which act as initiators. Thus, a valid zone is 32 host ports plus 8 SVC ports. This restriction exists because the O(N²) scaling of the number of registered state change notification (RSCN) messages with the number of initiators per zone (N) can cause problems. It is recommended, however, to zone using single-HBA-port zoning, as described in the next paragraph.
- For configurations of more than 64 hosts per cluster, the SAN Volume Controller supports a more restrictive set of host zoning rules. Each HBA port must be placed into a separate zone. This zone should also include exactly one port from each SVC node in the I/O Group or Groups associated with this host.
For smaller configurations, it is recommended, but not mandatory, that hosts be zoned this way. Switch zones containing host HBAs must contain host HBAs from similar hosts, or similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, as must QLogic and Emulex adapters.
To obtain the best performance from a host with multiple FC ports, the zoning should ensure that each FC port of a host is zoned with a different group of SVC ports. To obtain the best overall performance of the subsystem, and to prevent overloading, the workload on each SVC port should be equal. This typically involves zoning approximately the same number of host FC ports to each SVC FC port.
For any given VDisk, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (that is, four paths to each VDisk provided by this I/O Group) are sufficient.
Figure 5-2 shows an overview of a basic setup with servers that each have two single-port HBAs. The simplest way to connect them is:
- Try to distribute the hosts equally between two logical sets per I/O Group.
- Always connect hosts from each set to the same group of SVC ports. Such a port group includes exactly one port from each SVC node in the I/O Group. The correct connections are defined by zoning.
The port groups are defined as follows:
- Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes (for example, N1/N2 of I/O Group zero).
- Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of the I/O Group.
Aliases (per I/O Group) can be created for these port groups:
- Fabric A: IOGRP0_PG1 --> N1_P1;N2_P1, IOGRP0_PG2 --> N1_P3;N2_P3
- Fabric B: IOGRP0_PG1 --> N1_P4;N2_P4, IOGRP0_PG2 --> N1_P2;N2_P2
Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set, and always using the host port WWPN plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from those I/O Groups to the host zone.
This schema provides four paths to one I/O Group per host, and it helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-2 shows an overview of this host zoning schema.
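The alias-plus-host-port convention above can be sketched as a small script that composes a zone definition from a host WWPN alias and a port-group alias. This is our own generic illustration; the actual alias and zone creation commands depend on the switch vendor (for example, alicreate and zonecreate on Brocade FOS), and HOST_A_HBA0 is a hypothetical host alias.

```shell
#!/bin/sh
# Compose SVC host zones from port-group aliases, following the schema in the
# text: hosts in set one use the PG1 alias, hosts in set two the PG2 alias.
IOGRP0_PG1="N1_P1;N2_P1"   # fabric A, port group 1: one port per node
IOGRP0_PG2="N1_P3;N2_P3"   # fabric A, port group 2: one port per node

make_host_zone() {
    # $1 = host port WWPN alias, $2 = port-group alias members
    echo "$1;$2"
}

echo "zone HOST_A_Z0: $(make_host_zone "HOST_A_HBA0" "$IOGRP0_PG1")"
```

Because each port group contains exactly one port per node, every zone built this way gives the host two paths per fabric, four in total, to the I/O Group.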
We recommend using, whenever possible, the minimum number of paths that are necessary to achieve sufficient redundancy in the SAN environment; for SVC environments, this means no more than four paths per I/O Group or VDisk.
Keep in mind that all paths have to be managed by the multipath driver on the host side. If we assume a server connected through four ports to the SVC, each VDisk is seen through eight paths. With 125 VDisks mapped to this server, the multipath driver has to support the handling of up to 1,000 active paths (8 x 125).
Details and current limitations for IBM's Subsystem Device Driver (SDD) can be found in the Storage Multipath Subsystem Device Driver User's Guide, GC52-1309-01, which can be downloaded from:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1
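The path arithmetic in this example is simply paths per VDisk multiplied by the number of mapped VDisks, as the following sketch shows:

```shell
#!/bin/sh
# Path arithmetic from the example above: a host zoned so that each VDisk is
# visible on 8 paths, with 125 VDisks mapped, yields 1000 active paths that
# the host multipath driver (for example, SDD) must manage.
paths_per_vdisk=8
vdisks=125
total_paths=$((paths_per_vdisk * vdisks))
echo "total active paths: $total_paths"
```

Reducing the zoning to four paths per VDisk halves this number, which is why minimal-path zoning is recommended.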
For hosts using four HBAs/ports with eight connections to an I/O Group, the zoning schema shown in Figure 5-3 can be used. This schema can be combined with the four-path zoning schema mentioned above.
5.2.2 Nodes
There are one or more iSCSI nodes within a Network Entity. The iSCSI node is accessible through one or more Network Portals. A Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name, referred to as an IQN. Keep in mind that this name serves only to identify the node; it is not the node's address. In iSCSI, the name is separate from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as it is implemented in the SVC, the same iSCSI node to use multiple addresses.
5.2.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which by default is in the form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SVC is defined by specifying its iSCSI initiator name or names. An example of an IQN of a Windows Server is:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, the host's initiator IQNs have to be specified. Details of how a host is created can be found in Chapter 7, SVC operations using the CLI on page 337, and Chapter 8, SVC operations using the GUI on page 469.
An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name. An overview of how iSCSI is implemented in the SVC is shown in Figure 5-4.
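Given the default format stated above, a node's target IQN can be derived from the cluster and node names. The following is a small sketch of our own; the cluster and node names are examples:

```shell
#!/bin/sh
# Build an SVC node's default iSCSI target IQN:
#   iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
svc_node_iqn() {
    # $1 = cluster name, $2 = node name
    echo "iqn.1986-03.com.ibm:2145.$1.$2"
}

svc_node_iqn "ITSO-CLS1" "node1"
```

The date and domain portion (iqn.1986-03.com.ibm) is fixed by IBM; only the cluster and node names vary per installation.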
A host that uses iSCSI as the communication protocol to access its VDisks on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node; for iSCSI, both ports can be used. Note that Ethernet link aggregation (port trunking) or channel bonding for the SVC nodes' Ethernet ports is not supported for the 1 Gbps ports in this release. Support for jumbo frames, that is, MTU sizes greater than 1,500 bytes, is planned for future SVC releases.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and/or two IPv6 addresses, or iSCSI Network Portals, can be defined: one IPv4 and/or one IPv6 address per Ethernet port, as shown in Figure 2-12 on page 29.
The host can also send a SendTargets request using the iSCSI protocol to the iSCSI TCP port (port 3260). The Network Portal IP addresses of the iSCSI targets have to be defined before a discovery can be started.
5.4 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable CHAP authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SVC does not allow it to perform I/O to VDisks. The cluster can also be assigned a CHAP secret.
A new feature with iSCSI is that the IP addresses used to address an iSCSI target on the SVC node can be moved between the nodes of an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails because of a cause outside of the SVC (such as the cable being disconnected or the Ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of the Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.
With the SVC 5.1 release, a new concept used for the handling of the iSCSI IP address failover, called a "clustered Ethernet port", was introduced. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster and contains configuration settings that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI and/or management ports.
An example of an iSCSI target node failover is shown in Figure 5-5. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:
1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2.
During a restart of an SVC node (N1) the iSCSI initiator including all its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as configuration node), will failover to Port1/Port2 of the partner node within the I/O Group, i.e, node N2. An iSCSI initiator running on a server will execute a reconnect to its iSCSI target, i.e. the same IP addresses presented now by a new node of the SVC cluster. 3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 will failback to N1. Again the iSCSI initiator running on a server will execute a reconnect to its iSCSI target. The management addresses will not failback. N2 will remain in the role of the configuration node for this cluster.
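As a sketch of how CHAP might be enabled from the SVC CLI (the secret values and host name are illustrative, and you should verify the exact parameter names against the CLI reference for your code level):

```
svctask chcluster -iscsiauthmethod chap -chapsecret mysecret   (cluster-wide CHAP secret)
svctask chhost -chapsecret hostsecret linux_host1              (per-host CHAP secret)
```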
From the server's point of view, a multipathing driver (MPIO) is not required to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node. A host multipathing driver for iSCSI is required if you want to:
- Protect a server from network link failures, including port failures on the SVC nodes
- Protect a server from a server HBA failure (if two HBAs are in use)
- Protect a server from network failures, if the server is connected through two HBAs to two separate networks
- Provide load balancing across the server's HBAs and the network links

The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses. The new commands for managing iSCSI IP addresses are:
- The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.
- The svctask cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.

The new commands for managing the cluster IP addresses are:
- The svcinfo lsclusterip command returns a list of the cluster management IP addresses configured for each port.
- The svctask chclusterip command modifies the IP configuration parameters for the cluster.

A detailed description of how to use these commands can be found in Chapter 7, SVC operations using the CLI on page 337.
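As an illustration only (the node ID and addresses are hypothetical; check the command syntax in the CLI documentation for your release), assigning and then listing an iSCSI port IP address might look as follows:

```
svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
svcinfo lsportip 1
```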
Chapter 5. Host configuration
The parameters for remote services (SSH and Web services) remain associated with the cluster object. During a software upgrade from 4.3.1, the configuration settings for the cluster are used to configure clustered Ethernet Port 1.

For iSCSI-based access, using two separate networks and separating iSCSI traffic within those networks by using a dedicated VLAN path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host server's access to the VDisk LUNs.
2. Select Host Attachment Scripts for AIX. 3. Select either Host Attachment Script for SDDPCM or Host Attachment Scripts for SDD from the options, depending on your multipath device driver. 4. Download the AIX host attachment script for your multipath device driver. 5. Follow the instructions that are provided on the Web site or any readme files to install the script.
2. Issue the following command to enable dynamic tracking for each Fibre Channel device: chdev -l fscsi0 -a dyntrk=yes The previous example command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking
#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
You can find the worldwide port number (WWPN) of your FC Host Adapter and check the firmware level, as shown in Example 5-4. The Network Address is the WWPN for the FC adapter.
Example 5-4 FC Host Adapter settings and WWPN
U0.1-P2-I4/Q1    FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
SDD works by grouping the physical paths to an SVC LUN, each represented by an individual hdisk device within AIX, into a vpath device (for example, if you have four physical paths to an SVC LUN, this produces four hdisk devices within AIX, which SDD groups into a single vpath device). From this moment onwards, AIX uses the vpath device to route I/O to the SVC LUN. Therefore, when making an LVM volume group using mkvg, we specify the vpath device as the destination and not the hdisk device. The SDD support matrix for AIX is available at:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
SDD installation
In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd directory. From here we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally, we initiate the installp command, which installs SDD onto this AIX host.
Example 5-5 Installing SDD on AIX
#ls -l
total 3032
-rw-r-----   1 root  system  1546240 Jun 24 15:29 devices.sdd.53.rte.tar
#tar -tvf devices.sdd.53.rte.tar
-rw-r-----   0 0     1536000 Oct 06 11:37:13 2006 devices.sdd.53.rte
#tar -xvf devices.sdd.53.rte.tar
x devices.sdd.53.rte, 1536000 bytes, 3000 media blocks.
# inutoc .
#ls -l
total 6032
-rw-r--r--   1 root  system      476 Jun 24 15:33 .toc
-rw-r-----   1 root  system  1536000 Oct 06 2006  devices.sdd.53.rte
-rw-r-----   1 root  system  1546240 Jun 24 15:29 devices.sdd.53.rte.tar
# installp -ac -d . all

Example 5-6 checks the installation of SDD.
Example 5-6 Checking SDD device driver
devices.sdd.53.rte    1.7.0.0  COMMITTED
devices.sdd.53.rte    1.7.0.0  COMMITTED
Note: A specific 2145 devices.fcp file no longer exists. The standard devices.fcp file now combines support for SVC, ESS, DS8000, and DS6000.

We can also check that the SDD server is operational, as shown in Example 5-7 on page 167.
Example 5-7 Checking that the SDD server (sddsrv) is operational

#lssrc -s sddsrv
Subsystem         Group            PID     Status
 sddsrv                            168430  active
#ps -aef | grep sdd
    root 135174  41454   0 15:38:20  pts/1  grep sdd
    root 168430 127292   0 15:10:27      -  /usr/sbin/sddsrv
Enabling the SDD or SDDPCM Web interface is shown in 5.15, Using SDDDSM, SDDPCM, and SDD Web interface on page 250.
SDDPCM installation
In Example 5-8, we show the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.
Example 5-8 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r-----   1 root    system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r-----  271001  449628  1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root    system      531 Jul 15 13:25 .toc
-rw-r-----   1 271001  449628  1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root    system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-9 checks the installation of SDDPCM.
Example 5-9 Checking SDDPCM device driver
devices.sddpcm.61.rte    2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
devices.sddpcm.61.rte    2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
Enabling the SDD or SDDPCM Web interface is shown in 5.15, Using SDDDSM, SDDPCM, and SDD Web interface on page 250.
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3
Before adding a new volume from the SVC, the AIX host system Kanaga had a vanilla configuration, as shown in Example 5-10.
Example 5-10 Status of AIX host system Kanaga
In Example 5-11, we show SVC configuration information relating to our AIX host, specifically, the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration. Using the SVC CLI, we can check that the host WWPNs, listed in Example 5-4 on page 165, are logged into the SVC for the host definition Kanaga, by entering:
svcinfo lshost Kanaga
We can also find the serial numbers of the VDisks using the following command:
svcinfo lshostvdiskmap Kanaga
Example 5-11 SVC definitions for host system Kanaga
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Kanaga
id 2
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 10000000C932A7FB
node_logged_in_count 2
state active
WWPN 10000000C932A800
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Kanaga
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  Kanaga 0       13       Kanaga0001 10000000C932A7FB 60050768018301BF2800000000000015
2  Kanaga 1       14       Kanaga0002 10000000C932A7FB 60050768018301BF2800000000000016
2  Kanaga 2       15       Kanaga0003 10000000C932A7FB 60050768018301BF2800000000000017
2  Kanaga 3       16       Kanaga0004 10000000C932A7FB 60050768018301BF2800000000000018
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0001
id 13
name Kanaga0001
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000015
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001
id name       SCSI_id host_id host_name wwpn             vdisk_UID
13 Kanaga0001 0       2       Kanaga    10000000C932A7FB 60050768018301BF2800000000000015
13 Kanaga0001 0       2       Kanaga    10000000C932A800 60050768018301BF2800000000000015
We need to run cfgmgr on the AIX host to discover the new disks and enable us to start the vpath configuration; if we run the config manager (cfgmgr) on each FC adapter, it will not create the vpaths, only the new hdisks. To configure the vpaths, we need to run the cfallvpath command after issuing the cfgmgr command on each of the FC adapters: # cfgmgr -l fcs0 # cfgmgr -l fcs1 # cfallvpath Alternatively, use the cfgmgr -vS command to check the complete system. This command will probe the devices sequentially across all FC adapters and attached disks; however, it is very time intensive: # cfgmgr -vS The raw SVC disk configuration of the AIX host system now appears as shown in Example 5-12. We can see the multiple hdisk devices, representing the multiple routes to the same SVC LUN, and we can see the vpath devices available for configuration.
Example 5-12 VDisks from SVC added with multiple different paths for each VDisk
#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02      SAN Volume Controller Device
hdisk4  Available 1Z-08-02      SAN Volume Controller Device
hdisk5  Available 1Z-08-02      SAN Volume Controller Device
hdisk6  Available 1Z-08-02      SAN Volume Controller Device
hdisk7  Available 1D-08-02      SAN Volume Controller Device
hdisk8  Available 1D-08-02      SAN Volume Controller Device
hdisk9  Available 1D-08-02      SAN Volume Controller Device
hdisk10 Available 1D-08-02      SAN Volume Controller Device
hdisk11 Available 1Z-08-02      SAN Volume Controller Device
hdisk12 Available 1Z-08-02      SAN Volume Controller Device
hdisk13 Available 1Z-08-02      SAN Volume Controller Device
hdisk14 Available 1Z-08-02      SAN Volume Controller Device
hdisk15 Available 1D-08-02      SAN Volume Controller Device
hdisk16 Available 1D-08-02      SAN Volume Controller Device
hdisk17 Available 1D-08-02      SAN Volume Controller Device
hdisk18 Available 1D-08-02      SAN Volume Controller Device
vpath0  Available               Data Path Optimizer Pseudo Device
vpath1  Available               Data Path Optimizer Pseudo Device
vpath2  Available               Data Path Optimizer Pseudo Device
vpath3  Available               Data Path Optimizer Pseudo Device
To make a volume group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg command, passing the vpath device as a parameter instead of the hdisk device. This is shown in Example 5-13.
Example 5-13 Running the mkvg command
#mkvg -y itsoaixvg vpath1
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
Now, by running the lspv command, we can see that vpath1 has been assigned into the itsoaixvg volume group, as shown in Example 5-14.
Example 5-14 Showing the vpath assignment into the volume group
The lsvpcfg command displays the new relationship between vpath1 and the itsoaixvg volume group, and also shows each hdisk associated with vpath1, as shown in Example 5-15.
Example 5-15 Displaying the vpath to hdisk to volume group relationship

#lsvpcfg
vpath0 (Avail ) 60050768018301BF2800000000000015 = hdisk3 (Avail ) hdisk7 (Avail )
vpath1 (Avail pv itsoaixvg) 60050768018301BF2800000000000016 = hdisk4 (Avail ) hdisk8 (Avail )
vpath2 (Avail ) 60050768018301BF2800000000000017 = hdisk5 (Avail ) hdisk9 (Avail )
vpath3 (Avail ) 60050768018301BF2800000000000018 = hdisk6 (Avail ) hdisk10 (Avail )
In Example 5-16, running the command lspv vpath1 shows a more verbose output for vpath1.
Example 5-16 Verbose details of vpath1
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5          VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:
TOTAL PPs:          639 (5112 megabytes)      VG DESCRIPTORS:
FREE PPs:           639 (5112 megabytes)      HOT SPARE:
USED PPs:           0 (0 megabytes)           MAX REQUEST:
FREE DISTRIBUTION:  128..128..127..128..128
USED DISTRIBUTION:  00..00..00..00..00
#datapath query adapter

Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi0  NORMAL  ACTIVE       0       0      4       1
    1  fscsi1  NORMAL  ACTIVE      56       0      4       1
In Example 5-18, we see detailed information about each vpath device. Initially, vpath1 is the only vpath device in an open state, because it is the only vpath currently assigned to a volume group. Additionally, for vpath1, we see that only path #1 and path #2 have been selected (used) by SDD, because these are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths within this vpath device are only accessed in a failover scenario.
Example 5-18 SDD commands used to check the availability of the devices
#datapath query device

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000015
==========================================================================
Path#      Adapter/Hard Disk      State     Mode     Select  Errors
    0          fscsi0/hdisk3      CLOSE    NORMAL         0       0
    1          fscsi1/hdisk7      CLOSE    NORMAL         0       0
    2          fscsi0/hdisk11     CLOSE    NORMAL         0       0
    3          fscsi1/hdisk15     CLOSE    NORMAL         0       0

DEV#: 1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000016
==========================================================================
Path#      Adapter/Hard Disk      State     Mode     Select  Errors
    0          fscsi0/hdisk4      OPEN     NORMAL         0       0
    1          fscsi1/hdisk8      OPEN     NORMAL        28       0
    2          fscsi0/hdisk12     OPEN     NORMAL        32       0
    3          fscsi1/hdisk16     OPEN     NORMAL         0       0

DEV#: 2  DEVICE NAME: vpath2  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000017
==========================================================================
Path#      Adapter/Hard Disk      State     Mode     Select  Errors
    0          fscsi0/hdisk5      CLOSE    NORMAL         0       0
    1          fscsi1/hdisk9      CLOSE    NORMAL         0       0
    2          fscsi0/hdisk13     CLOSE    NORMAL         0       0
    3          fscsi1/hdisk17     CLOSE    NORMAL         0       0

DEV#: 3  DEVICE NAME: vpath3  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000018
==========================================================================
Path#      Adapter/Hard Disk      State     Mode     Select  Errors
    0          fscsi0/hdisk6      CLOSE    NORMAL         0       0
    1          fscsi1/hdisk10     CLOSE    NORMAL         0       0
    2          fscsi0/hdisk14     CLOSE    NORMAL         0       0
    3          fscsi1/hdisk18     CLOSE    NORMAL         0       0
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD
The volume group itsoaixvg is created using vpath1. A logical volume is then created in the volume group, and the file system testlv1 is created and mounted on the mount point /testlv1, as seen in Example 5-19.
Example 5-19 Host system new volume group and file system configuration
#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     PVs
loglv01   jfs2log  1
fslv00    jfs2     1
fslv01    jfs2     1
#df -g
Filesystem     GB blocks   Free  %Used  Iused  %Iused  Mounted on
/dev/hd4            0.03   0.01    62%   1357     31%  /
/dev/hd2            9.06   4.32    53%  17341      2%  /usr
/dev/hd9var         0.03   0.03    10%    137      3%  /var
/dev/hd3            0.12   0.12     7%     31      1%  /tmp
/dev/hd1            0.03   0.03     2%     11      1%  /home
/proc                  -      -      -      -       -  /proc
/dev/hd10opt        0.09   0.01    86%   1947     38%  /opt
/dev/lv00           0.41   0.39     4%     19      1%  /usr/sys/inst.images
/dev/fslv00         2.00   2.00     1%      4      1%  /teslv1
/dev/fslv01         2.00   2.00     1%      4      1%  /teslv2
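A configuration like the one shown in Example 5-19 could have been produced by commands along these lines (the file system size and mount options here are illustrative assumptions, not taken from the original configuration):

```
mkvg -y itsoaixvg vpath1
crfs -v jfs2 -g itsoaixvg -a size=2G -m /testlv1 -p rw -a agblksize=4096
mount /testlv1
```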
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM
Before adding a new volume from the SVC, the AIX host system Atlantic had a vanilla configuration, as shown in Example 5-20.
Example 5-20 Status of AIX host system Atlantic
In Example 5-22 on page 175, we show SVC configuration information relating to our AIX host, specifically the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration. Our example host is named Atlantic. Example 5-21 shows the HBA information of our example host.
Example 5-21 HBA information example host Atlantic
# lsdev -Cc adapter | grep fcs
fcs1 Available 1H-08 FC Adapter
fcs2 Available 1D-08 FC Adapter
# lscfg -vpl fcs1
fcs1    U0.1-P2-I4/Q1    FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A644
Manufacturer................001E
Customer Card ID Number.....2765
FRU Number..................00P4495
Network Address.............10000000C932A865
ROS Level and ID............02C039D0
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401411
Device Specific.(Z5)........02C039D0
Device Specific.(Z6)........064339D0
Device Specific.(Z7)........074339D0
Device Specific.(Z8)........20000000C932A865
Device Specific.(Z9)........CS3.93A0
Device Specific.(ZA)........C1D3.93A0
Device Specific.(ZB)........C2D3.93A0
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I4/Q1
PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1

# lscfg -vpl fcs2
fcs2    U0.1-P2-I5/Q1    FC Adapter

Part Number.................80P4383
EC Level....................A
Serial Number...............1F5350CD42
Manufacturer................001F
Customer Card ID Number.....2765
FRU Number..................80P4384
Network Address.............10000000C94C8C1C
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C94C8C1C
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I5/Q1
PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

Using the SVC CLI, we can check that the host WWPNs, as listed in Example 5-22, are logged into the SVC for the host definition Atlantic, by entering:
svcinfo lshost Atlantic
We can also find the serial numbers of the VDisks using the following command:
svcinfo lshostvdiskmap Atlantic
Example 5-22 SVC definitions for host system Atlantic
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name     SCSI_id vdisk_id wwpn             vdisk_UID
8  Atlantic 0       14       10000000C94C8C1C 6005076801A180E90800000000000060
8  Atlantic 1       22       10000000C94C8C1C 6005076801A180E90800000000000061
We need to run cfgmgr on the AIX host to discover the new disks and enable us to use the disks: # cfgmgr -l fcs1 # cfgmgr -l fcs2 Alternatively, use the cfgmgr -vS command to check the complete system. This command will probe the devices sequentially across all FC adapters and attached disks; however, it is very time intensive: # cfgmgr -vS The raw SVC disk configuration of the AIX host system now appears as shown in Example 5-23. We can see the multiple MPIO FC 2145 devices, representing the SVC LUN.
Example 5-23 VDisks from SVC added with multiple different paths for each VDisk
# lsdev -Cc disk
hdisk0 Available  16 Bit LVD SCSI Disk Drive
hdisk1 Available  16 Bit LVD SCSI Disk Drive
hdisk2 Available  16 Bit LVD SCSI Disk Drive
hdisk3 Available  MPIO FC 2145
hdisk4 Available  MPIO FC 2145
hdisk5 Available  MPIO FC 2145
To make a volume group (for example, itsoaixvg) to host the LUNs, we use the mkvg command, passing the device as a parameter. This is shown in Example 5-24.
Example 5-24 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

Now, by running the lspv command, we can see the disks and the assigned volume groups, as shown in Example 5-25.
Example 5-25 Showing the hdisk assignment into the volume groups
In Example 5-26, running the command lspv hdisk3 shows more verbose output for one of the SVC LUNs.
Example 5-26 Verbose details of hdisk3
# lspv hdisk3
PHYSICAL VOLUME:    hdisk3                    VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cdca28b589f5          VG IDENTIFIER     0009cdca00004c000000011b28b58ae2
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:
TOTAL PPs:          511 (4088 megabytes)      VG DESCRIPTORS:
FREE PPs:           511 (4088 megabytes)      HOT SPARE:
USED PPs:           0 (0 megabytes)           MAX REQUEST:
FREE DISTRIBUTION:  103..102..102..102..102
USED DISTRIBUTION:  00..00..00..00..00
#
# pcmpath query adapter

Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi1  NORMAL  ACTIVE     407       0      6       6
    1  fscsi2  NORMAL  ACTIVE     425       0      6       6
From Example 5-28, we see detailed information about each MPIO device. The asterisk (*) next to a path number marks a non-preferred path; the paths without an asterisk are the ones predominantly selected (used) by SDDPCM, because they are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster. The non-preferred paths within an MPIO device are mainly accessed in a failover scenario.
Example 5-28 SDDPCM commands used to check the availability of the devices
# pcmpath query device

DEV#: 3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select  Errors
    0          fscsi1/path0       OPEN     NORMAL       152       0
    1*         fscsi1/path1       OPEN     NORMAL        48       0
    2*         fscsi2/path2       OPEN     NORMAL        48       0
    3          fscsi2/path3       OPEN     NORMAL       160       0

DEV#: 4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select  Errors
    0*         fscsi1/path0       OPEN     NORMAL        37       0
    1          fscsi1/path1       OPEN     NORMAL        66       0
    2          fscsi2/path2       OPEN     NORMAL        71       0
    3*         fscsi2/path3       OPEN     NORMAL        38       0

DEV#: 5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select  Errors
    0          fscsi1/path0       OPEN     NORMAL        66       0
    1*         fscsi1/path1       OPEN     NORMAL        38       0
    2*         fscsi2/path2       OPEN     NORMAL        38       0
    3          fscsi2/path3       OPEN     NORMAL        70       0
#
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The volume group itsoaixvg is created using hdisk3. A JFS2 file system is then created in the volume group and mounted on the mount point /itsoaixvg, as seen in Example 5-29.
Example 5-29 Host system new volume group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     LPs  PPs  PVs
loglv00   jfs2log  1    1    1
fslv00    jfs2     384  384  1
#
removed, which means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that VDisk has to be stopped before it is possible to expand the VDisk. The following steps show how to expand a volume on an AIX host, where the volume is a VDisk from the SVC:
1. To list a VDisk size, use the command svcinfo lsvdisk <VDisk_name>. Example 5-30 shows the VDisk Kanaga0002 that we have allocated to our AIX server before we expand it. Here, the capacity is 5 GB, and the vdisk_UID is 60050768018301BF2800000000000016.
Example 5-30 Expanding a VDisk on AIX
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
2. To identify which vpath this VDisk is associated with on the AIX host, we use the SDD command datapath query device, as shown in Example 5-18. There we can see that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated with vpath1, because the vdisk_UID matches the SERIAL field on the AIX host. 3. To see the size of the volume on the AIX host, we use the lspv command, as shown in Example 5-31. This shows that the volume size is 5112 MB, equal to 5 GB, as shown in Example 5-30 on page 179.
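To avoid scanning the datapath query device output by eye, the match against the SERIAL field can be sketched as follows (the UID is the one from this example; grep -p is the AIX paragraph-mode grep, which prints the whole paragraph containing the match):

```
datapath query device | grep -p "60050768018301BF2800000000000016"
```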
Example 5-31 Finding the size of the volume in AIX
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5          VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:
TOTAL PPs:          639 (5112 megabytes)      VG DESCRIPTORS:
FREE PPs:           0 (0 megabytes)           HOT SPARE:
USED PPs:           639 (5112 megabytes)      MAX REQUEST:
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  128..128..127..128..128
4. To expand the volume on the SVC, we use the command svctask expandvdisksize to increase the capacity of the VDisk. In Example 5-32, we expand the VDisk by 1 GB.
Example 5-32 Expanding a VDisk
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb Kanaga0002

5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here we can see that the VDisk Kanaga0002 has been expanded to 6 GB in capacity (Example 5-33).
Example 5-33 Verifying that the VDisk has been expanded
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 6.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

6. AIX has not yet recognized a change in the capacity of the vpath1 volume, because no dynamic mechanism exists within the operating system to communicate the configuration update. Therefore, to make AIX recognize the extra capacity on the volume without stopping any applications, we use the chvg -g fc_source_vg command, where fc_source_vg is the name of the volume group to which vpath1 belongs. If AIX does not return anything, the command was successful, and the volume changes in this volume group have been saved. If AIX cannot see any changes in the volumes, it returns a message indicating this.
7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown in Example 5-34.
Example 5-34 Verify that AIX can see the newly expanded VDisk
#lspv vpath1
PHYSICAL VOLUME:    vpath1                    VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5          VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                         ALLOCATABLE:
PP SIZE:            8 megabyte(s)             LOGICAL VOLUMES:
TOTAL PPs:          767 (6136 megabytes)      VG DESCRIPTORS:
FREE PPs:           128 (1024 megabytes)      HOT SPARE:
USED PPs:           639 (5112 megabytes)      MAX REQUEST:
FREE DISTRIBUTION:  00..00..00..00..128
USED DISTRIBUTION:  154..153..153..153..26
Here we can see that the volume now has a size of 6136 MB, equal to 6 GB. After this, we can expand the file systems in this volume group to use the new capacity.
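As a sketch of the refresh-and-grow sequence described in the steps above (the volume group name is from this example, while the file system mount point and the 1 GB increment are illustrative assumptions):

```
chvg -g itsoaixvg          (make AIX re-read the enlarged VDisk capacity online)
chfs -a size=+1G /testlv1  (grow an existing JFS2 file system into the new PPs)
```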
5.6.1 Configuring Windows 2000, Windows 2003, and Windows 2008 hosts
This section provides an overview of the requirements for attaching the SVC to a host running Windows 2000 Server, Windows 2003 Server, or Windows 2008 Server. Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:
- For the Windows Server 2003 x64 Edition operating system, you must install the Hotfix from KB 908980. If you do not install it before operation, preferred pathing is not available. You can find the Hotfix at:
http://support.microsoft.com/kb/908980
- Check the LUN limitations for your host system. Ensure that there are enough Fibre Channel adapters installed in the server to handle the total number of LUNs that you want to attach.
There, you will also find the hardware list for supported host bus adapters and the driver levels for Windows. Check the supported firmware and driver level for your host bus adapter, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver:
- For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
- For the QLogic HBA driver, SDDDSM requires the storport version of the miniport driver.
- For the QLogic HBA driver, SDD requires the scsiport version of the miniport driver.
e. Enable Target Reset: No
Note: If you are using a subsystem device driver (SDD) version lower than 1.6, set Enable Target Reset to Yes.
f. Login Retry Count: 30
g. Port Down Retry Count: 15
h. Link Down Timeout: 30
i. Extended error logging: Disabled (might be enabled for debugging)
j. RIO Operation Mode: 0
k. Interrupt Delay Timer: 0
10. Press Esc to return to the Configuration Settings menu.
11. Press Esc.
12. From the Configuration settings modified window, select Save changes.
13. From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter is installed in your system.
14. Select the other host adapter and repeat steps 4 to 12.
15. Repeat this process for all installed QLogic adapters in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.
See the following Web site for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en

Note: We recommend that you use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM.
Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSIport driver. After downloading the appropriate version of SDD from the Web site, extract the file and run setup.exe to install SDD. A command prompt window will appear; answer Y (Figure 5-7) to install the driver.
After the setup has completed, answer Y again to reboot your system (Figure 5-8).
To check if your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-9).
The Subsystem Device Driver Management Properties window will appear. Select the Driver tab and make sure that you have installed the correct driver version (see Figure 5-10).
SDDDSM provides the following functions:
- Automatic path failover protection
- Concurrent download of licensed internal code
- Path-selection policies for the host system
There is no SDDDSM support for Windows 2000, and for the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver. Table 5-3 shows the SDDDSM driver levels that were supported at the time of writing.
Table 5-3 Currently supported SDDDSM driver levels

Windows operating system             SDDDSM level
2003 SP2 (32-bit) / 2003 SP2 (x64)   2.2.0.0-11
2008 (32-bit) / 2008 (x64)           2.2.0.0-11
To check which levels are available, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en#WindowsSDDDSM
To download SDDDSM, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en
The installation procedure for SDDDSM is the same as for SDD, but remember that you have to use the StorPort HBA driver instead of the SCSIport driver. The SDD installation is described in 5.6.6, SDD driver installation on Windows on page 186. After completing the installation, you will see the Microsoft MPIO device in Device Manager (Figure 5-11).
The SDDDSM installation for Windows 2008 is described in 5.8, Example configuration of attaching an SVC to a Windows 2008 host on page 199.
Figure 5-12 Windows 2003 host system before adding a new volume from SVC
We can check that the WWPN is logged into the SAN Volume Controller for the host Senegal by entering the following command (Example 5-35): svcinfo lshost Senegal
Example 5-35 Host info - Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the host Senegal, the VDisk Senegal_bas0001, and the mapping between the host and the VDisk are defined in the SAN Volume Controller, as described in Example 5-36 on page 191. In our example, the VDisks Senegal_bas0002 and Senegal_bas0003 have the same configuration as VDisk Senegal_bas0001.
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
We can also find the serial number of the VDisks by entering the following command (Example 5-37): svcinfo lsvdiskhostmap Senegal_bas0001
Example 5-37 VDisk serial number - Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001
id name SCSI_id host_id host_name wwpn vdisk_UID
7 Senegal_bas0001 0 1 Senegal 210000E08B89B9C0 6005076801A180E9080000000000000F
7 Senegal_bas0001 0 1 Senegal 210000E08B89CCC2 6005076801A180E9080000000000000F

After installing the necessary drivers and after the rescan disks operation completes, the new disks are found in the Computer Management window, as shown in Figure 5-13.
Figure 5-13 Windows 2003 host system with three new volumes from SVC
In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device (Figure 5-14 on page 193). The number of IBM 2145 SCSI Disk Devices that you see is equal to:
(# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs)
The IBM 2145 Multi-Path Disk Devices are the devices created by the multipath driver (Figure 5-14 on page 193). The number of these devices is equal to the number of VDisks presented to the host.
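The device-count arithmetic above can be sketched in a few lines of shell. The counts here are example values chosen for illustration, not values queried from a live host:

```shell
# Example values: 3 VDisks, 2 paths per I/O group per HBA, 2 HBAs.
vdisks=3
paths_per_iogrp_per_hba=2
hbas=2

# SCSI disk devices seen in Device Manager = VDisks x paths x HBAs.
scsi_disk_devices=$((vdisks * paths_per_iogrp_per_hba * hbas))
# Multipath disk devices = one per VDisk presented to the host.
multipath_devices=$vdisks

echo "$scsi_disk_devices $multipath_devices"   # prints: 12 3
```

With three VDisks and the recommended zoning, Device Manager would therefore show twelve IBM 2145 SCSI Disk Devices but only three IBM 2145 Multi-Path Disk Devices.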
When following the SAN zoning recommendation, this gives us, for one VDisk and a host with two HBAs:
(# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs) = 1 x 2 x 2 = 4 paths
You can check whether all paths are available by selecting Start → All Programs → Subsystem Device Driver (DSM) → Subsystem Device Driver (DSM). The SDD (DSM) command-line interface will appear. Enter the following command to see which paths are available to your system (Example 5-38):
Example 5-38 Datapath query device
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL      47       0
    1   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL      28       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
    1   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL     162       0
    2   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL     155       0
    3   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL      51       0
    1   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL      25       0

C:\Program Files\IBM\SDDDSM>

Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If a path is CLOSE, the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.
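Scanning a saved copy of the datapath query device output for paths that are not OPEN can be automated. This is an illustrative sketch; the file name is arbitrary and the sample rows are abbreviated path lines in the same column layout as the output above (in practice you would redirect the real command's output to the file):

```shell
# Sample path rows; column 6 is the path State.
cat > /tmp/datapath.txt <<'EOF'
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 47 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
EOF

# Report any path whose State is not OPEN; otherwise confirm all paths are OPEN.
awk '$6 != "OPEN" {print "path " $1 " (" $4 ") is " $6; bad=1}
     END {if (!bad) print "all paths OPEN"}' /tmp/datapath.txt
# prints: path 2 (Bus0/Disk1) is CLOSE
```

A check like this is useful after a reboot or fabric change, when a silently closed path would otherwise only be noticed during a failover.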
To expand a volume in use on Windows 2000 and Windows 2003, we used DiskPart. The DiskPart tool is part of Windows 2003; for other Windows versions, you can download it free of charge from Microsoft. DiskPart was developed by Microsoft to ease the administration of storage. It is a command-line interface through which you can manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them and get more detailed information about them, create partitions, extend volumes, and more. For more information, see the Microsoft Web sites:
http://www.microsoft.com
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech

The following discussion shows an example of how to expand a volume on a Windows 2003 host, where the volume is a VDisk from the SVC. To list a VDisk's size, use the command svcinfo lsvdisk <VDisk_name>. Example 5-36 on page 191 shows this information for Senegal_bas0001 before expanding the VDisk; there we can see that the capacity is 10 GB, and also what the vdisk_UID is. To find which vpath device this VDisk is on the Windows 2003 host, we use the SDD command datapath query device on the Windows host (Figure 5-15). The serial 6005076801A180E9080000000000000F of Disk1 on the Windows host (Figure 5-15) matches the vdisk_UID of Senegal_bas0001 (Example 5-36 on page 191). To see the size of the volume on the Windows host, we use Disk Manager, as shown in Figure 5-15.
This shows that the volume size is 10 GB. To expand the volume on the SVC, we use the command svctask expandvdisksize to increase the capacity on the VDisk. In this example, we expand the VDisk by 1 GB (Example 5-39).
Example 5-39 svctask expandvdisksize command
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the VDisk has been expanded, we use the command svcinfo lsvdisk. In Example 5-39, we can see that the VDisk Senegal_bas0001 has been expanded to 11 GB in capacity.
SAN Volume Controller V5.1
After performing a Disk Rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-16.
This shows that Disk1 now has 1 GB of new, unallocated capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-40:

diskpart        Starts DiskPart in a DOS prompt.
list volume     Shows you all available volumes.
select volume   Selects the volume to expand.
detail volume   Displays details for the selected volume, including the unallocated capacity.
extend          Extends the volume into the available unallocated space.
C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  ------
  Volume 0    C                 NTFS  Partition  75 GB  Healthy  System
  Volume 1    S    SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2    D                       DVD-ROM     0 B   Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume
  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB  1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB  0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

After extending the volume, the detail volume command shows that there is no free capacity left on the volume. The list volume command shows the file system size. The Disk Management window also shows the new disk size, as shown in Figure 5-17.
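The interactive session above can also be run unattended: DiskPart accepts a script file through its /s option. A minimal sketch follows; the volume number and file name are taken from this example, and the diskpart invocation itself (shown as a comment) must of course run on the Windows host being expanded:

```shell
# Build a DiskPart script equivalent to the interactive steps above.
cat > /tmp/expand_senegal.txt <<'EOF'
select volume 1
extend
detail volume
EOF

# On the Windows host, you would then run from a command prompt:
#   diskpart /s expand_senegal.txt
cat /tmp/expand_senegal.txt
```

Scripting the expansion is convenient when the same resize has to be repeated across several hosts or volumes.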
The example here uses a Windows basic disk. Dynamic disks can also be expanded by expanding the underlying SVC VDisk; the new space appears as unallocated space at the end of the disk.
In this case, you do not need to use the DiskPart tool; you can use the Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade a basic disk to a dynamic disk, or vice versa, without backing up your data. This operation is disruptive to the data because of a different position of the logical block address (LBA) on the disks.
5. Right-click the HBA and select Update Driver Software (Figure 5-18).
7. Enter the path to the extracted QLogic driver and click Next (Figure 5-20 on page 201).
9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).
5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system. After the reboot, the SDDDSM installation is complete. You can check this in Device Manager, as the SDDDSM device will appear (Figure 5-24 on page 203), and the SDDDSM tools will have been installed (Figure 5-25 on page 203).
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Diomede 0 20 Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0 Diomede 1 21 Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0 Diomede 2 22 Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Perform the following steps to use the devices on your Windows 2008 host: 1. Click Start and Run. 2. Enter diskmgmt.msc and click OK and the Disk Management window will appear. 3. Select Action and click Rescan Disks (Figure 5-26).
4. The SVC disks will now appear in the Disk Management window (Figure 5-27 on page 205).
After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-28).
5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM and click Subsystem Device Driver DSM (Figure 5-29). The SDDDSM command-line utility will appear.
6. Enter datapath query device and press Enter (Example 5-42). This command will display all disks and the available paths, including their state.
Example 5-42 Windows 2008 SDDDSM command-line utility
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    1   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL    1429       0
    2   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL    1456       0
    3   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL       0       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL    1520       0
    1   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL    1517       0
DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL      27       0
    1   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL    1396       0
    2   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL    1459       0
    3   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

Note: When following the SAN zoning recommendation, this gives us, for one VDisk and a host with two HBAs: (# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs) = 1 x 2 x 2 = 4 paths.

7. Right-click the disk in Disk Management and select Online to place the disk online (Figure 5-30).
8. Repeat step 7 for all of your attached SVC disks. 9. Right-click one disk again and select Initialize Disk (Figure 5-31).
10.Mark all the disks you want to initialize and click OK (Figure 5-32).
11.Right-click the unallocated disk space and select New Simple Volume (Figure 5-33).
12.The New Simple Volume Wizard appears. Click Next. 13.Enter a disk size and click Next (Figure 5-34).
14. Assign a drive letter and click Next (Figure 5-35 on page 209).
16.Click Finish and repeat this step for every SVC disk on your host system (Figure 5-37).
Figure 5-15 on page 195 shows Disk Manager before removing the disk. We will remove Disk 1 (S:). To find the correct VDisk information, we find the Serial/UID number using SDD (Example 5-43).
Example 5-43 Removing SVC disk from Windows server
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL    1471       0
    1   Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk1 Part0     OPEN    NORMAL    1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      20       0
    1   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      94       0
    2   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL      55       0
    3   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL     100       0
    1   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL      69       0
Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping to remove using the lshostvdiskmap command on the SVC, and after this we remove the actual VDisk mapping (Example 5-44).
Example 5-44 Finding and removing the VDisk mapping
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
Here we can see that the VDisk is removed from the server. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-38.
SDD also shows us that the status for all paths to Disk1 has changed to CLOSE because the disk is not available (Example 5-45 on page 213).
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk1 Part0     CLOSE   NORMAL    1471       0
    1   Scsi Port2 Bus0/Disk1 Part0     CLOSE   NORMAL       0       0
    2   Scsi Port3 Bus0/Disk1 Part0     CLOSE   NORMAL       0       0
    3   Scsi Port3 Bus0/Disk1 Part0     CLOSE   NORMAL    1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL      20       0
    1   Scsi Port2 Bus0/Disk2 Part0     OPEN    NORMAL     124       0
    2   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL      72       0
    3   Scsi Port3 Bus0/Disk2 Part0     OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk    State   Mode    Select  Errors
    0   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL     134       0
    1   Scsi Port2 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    2   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL       0       0
    3   Scsi Port3 Bus0/Disk3 Part0     OPEN    NORMAL      82       0

The disk (Disk1) is now removed from the server. However, to remove the SDD information for the disk, we need to reboot the server; this can wait until a more suitable time.
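Matching a Windows disk serial back to its SVC VDisk name, as done by hand above, can be scripted against a saved copy of the lshostvdiskmap output. This is a sketch; the file name is arbitrary and the sample rows are taken from the Senegal example:

```shell
# Serial reported by `datapath query device` for the disk to remove.
serial=6005076801A180E9080000000000000F

# Saved `svcinfo lshostvdiskmap Senegal` rows:
# id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
cat > /tmp/hostmap.txt <<'EOF'
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
EOF

# Column 7 is the vdisk_UID; column 5 is the vdisk_name.
awk -v s="$serial" '$7 == s {print $5}' /tmp/hostmap.txt   # prints: Senegal_bas0001
```

Matching on the vdisk_UID rather than the SCSI ID avoids removing the wrong mapping when disk numbering differs between the host and the SVC.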
More information about the CLI is covered in Chapter 7, SVC operations using the CLI on page 337.
5.10.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system:
- SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage hardware provider.
- IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.
4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the installation. You can click Cancel at any time to exit the installation. To move back to previous windows while using the wizard, click Back.
Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation
5. The License Agreement window opens (Figure 5-40). Read the license agreement information, select whether you accept the terms of the license agreement, and click Next. If you do not accept the terms, you cannot continue with the installation.
Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation
6. The Choose Destination Location window opens (Figure 5-41). Click Next to accept the default directory where the setup program will install the files, or click Change to select a different directory and then click Next.
Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation
8. From the next window, select the required CIM server, or select Enter the CIM Server address manually, and click Next (Figure 5-43).
Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation
9. The Enter CIM Server Details window appears. Enter the following information in the fields (Figure 5-44):
a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed.
b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed.
c. In the CIM Password field, type the password for the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the SAN Volume Controller Console.
d. Click Next.
Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation
10.In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-45 on page 219).
Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Note: If these settings change after installation, you can use the ibmvcfg.exe tool to update Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM agent server, port, or user information, contact your CIM agent administrator.
This command ensures that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider (Example 5-46).
Example 5-46 Microsoft Software Shadow copy provider
C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7

Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 3.1.0.1108

If you are able to successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.
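Checking that the IBM hardware provider is registered can be reduced to a grep over a saved copy of the vssadmin list providers output. A sketch, with an illustrative file name and sample lines abbreviated from the output above:

```shell
# Saved provider names from `vssadmin list providers`.
cat > /tmp/providers.txt <<'EOF'
Provider name: 'Microsoft Software Shadow Copy provider 1.0'
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
EOF

# Confirm the IBM hardware provider appears in the list.
if grep -q "IBM System Storage" /tmp/providers.txt; then
    echo "IBM VSS hardware provider registered"
else
    echo "IBM VSS hardware provider missing"
fi
# prints: IBM VSS hardware provider registered
```

A check like this is handy in a post-install validation script across many hosts.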
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify a different name. Associate the host with the WWPN 5000000000000001 (14 zeroes) (Example 5-48 on page 221).
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped to any other hosts. If you already have VDisks created for the free pool of volumes, you must assign the VDisks to the free pool.
4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg add command to add VDisks to the free pool (Example 5-49).
Example 5-49 Host mappings
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the VDisks have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs (Example 5-50).
Example 5-50 Verify hosts
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
2 VSS_FREE 0 10 msvc0001 5000000000000000 6005076801A180E90800000000000012
2 VSS_FREE 1 11 msvc0002 5000000000000000 6005076801A180E90800000000000013
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)
Configuration:
set user <CIMOM user name>
Chapter 5. Host configuration
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>
ibmvcfg showcfg
ibmvcfg set username <username>
ibmvcfg set password <password>
ibmvcfg set targetSVC <ipaddress>
The ibmvcfg commands are described in the following list:

set vssFreeInitiator <WWPN>: Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000.

set vssReservedInitiator <WWPN>: Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001.

listvols: Lists all virtual disks (VDisks), including information about size, location, and VDisk-to-host mappings. Example: ibmvcfg listvols

listvols all: Lists all VDisks, including information about size, location, and VDisk-to-host mappings. Example: ibmvcfg listvols

listvols free: Lists the volumes that are currently in the free pool. Example: ibmvcfg listvols free

listvols unassigned: Lists the volumes that are currently not mapped to any hosts. Example: ibmvcfg listvols unassigned

add: Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command. Examples: ibmvcfg add vdisk12; ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

rem: Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
Often, the automatic update process also upgrades the system to the latest kernel level. Hosts running SDD should therefore consider turning off the automatic update of kernel levels. Some drivers supplied by IBM, such as SDD, depend on a specific kernel and will cease to function on a new kernel. Similarly, host bus adapter (HBA) drivers need to be compiled against specific kernels in order to function optimally. By allowing automatic updates of the kernel, you risk unexpectedly impacting your host systems.
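On yum-based distributions, one common way to pin the kernel is an exclude line in the package manager configuration. The sketch below works against a stand-in copy of the file so it is safe to run anywhere; on a real host you would edit /etc/yum.conf itself:

```shell
# Stand-in for /etc/yum.conf (on a real host, edit /etc/yum.conf directly).
printf '[main]\ngpgcheck=1\n' > /tmp/yum.conf

# Exclude kernel packages from automatic updates so the kernel that SDD and
# the HBA drivers were built against stays in place.
grep -q '^exclude=' /tmp/yum.conf || echo 'exclude=kernel*' >> /tmp/yum.conf

grep '^exclude=' /tmp/yum.conf   # prints: exclude=kernel*
```

With the exclusion in place, kernel updates become a deliberate step that can be coordinated with matching SDD and HBA driver updates.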
Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.11.2, Configuration information on page 224.
The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server and we configured the zoning in order to access our VDisk from four paths.
Example 5-52 cat /proc/scsi/scsi command example
[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#
The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-53.
Example 5-53 rpm command example
[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm Preparing... ########################################### [100%] 1:IBMsdd ########################################### [100%] Added following line to /etc/inittab: srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1 [root@Palau sdd]# To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you should see an OK success message, as shown in Example 5-54.
Example 5-54 Starting SDD

[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                            [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                          [  OK  ]
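Because sdd start fails on an unsupported kernel, it can be worth checking the running kernel before starting the driver. This sketch compares uname -r against a list of supported kernels; the list shown is purely illustrative and the real list must come from the SDD package documentation for your level:

```shell
# Illustrative supported-kernel list (NOT from IBM; check your SDD readme).
supported="2.6.9-55.ELsmp 2.6.9-67.ELsmp"
running=$(uname -r)

# Match the running kernel against the space-separated list.
case " $supported " in
  *" $running "*) echo "kernel $running is in the supported list" ;;
  *)              echo "kernel $running is NOT in the supported list" ;;
esac
```

Running a check like this from an init or maintenance script gives an explicit warning instead of a failed driver load after an unplanned kernel update.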
Issue the cfgvpath query command to view the name and serial number of the VDisk configured in the SAN Volume Controller, as shown in Example 5-55.
Example 5-55 cfgvpath query example
[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
226
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0 /dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0 RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30 RTPG succeeded: sd_name=/dev/sdc df_ctlr=0 /dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0 RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30 RTPG succeeded: sd_name=/dev/sdd df_ctlr=0 /dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0 [root@Palau ~]# The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.
Example 5-56 cfgvpath command example
[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun  5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#
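In the cfgvpath query output in Example 5-55, every path line reports the same VDisk serial for a different sd device: four paths to one VDisk. A small parsing sketch (our own illustration, not part of SDD) makes that grouping explicit:

```python
import re

# Sample path lines in the style of the cfgvpath query output above (abridged).
CFGVPATH_OUTPUT = """\
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 ctlr_nbr=1
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 ctlr_nbr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 ctlr_nbr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 ctlr_nbr=1
"""

def paths_by_serial(text):
    """Group sd device names by the LUN serial that they report."""
    groups = {}
    for line in text.splitlines():
        m = re.match(r"(/dev/\S+) .*?serial=(\w+)", line)
        if m:
            groups.setdefault(m.group(2), []).append(m.group(1))
    return groups

for serial, devs in paths_by_serial(CFGVPATH_OUTPUT).items():
    print(serial, len(devs), devs)
```

One serial, four devices: these are the four paths that SDD will combine into a single vpath device.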
The configuration information is saved by default in the /etc/vpath.conf file. To save the configuration information to a different file, enter:

cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:

chkconfig sdd on

To verify the setting, enter:

chkconfig --list sdd

The result is shown in Example 5-57.
Example 5-57 sdd run level example
[root@Palau sdd]# chkconfig --list sdd
sdd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Palau sdd]#
If necessary, you can disable the startup option by entering:

chkconfig sdd off
Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths, paths 0 and 2, are used from one of the nodes. Paths 1 and 3 connect to the other node and are used as alternate, or backup, paths for high availability, as shown in Example 5-58.
Example 5-58 datapath query command example
[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt#           Name    State      Mode    Select  Errors  Paths  Active
    0  Host0Channel0   NORMAL    ACTIVE         1       0      2       0
    1  Host1Channel0   NORMAL    ACTIVE         0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk    State     Mode      Select  Errors
    0    Host0Channel0/sda    CLOSE     NORMAL         1       0
    1    Host0Channel0/sdb    CLOSE     NORMAL         0       0
    2    Host1Channel0/sdc    CLOSE     NORMAL         0       0
    3    Host1Channel0/sdd    CLOSE     NORMAL         0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:

- Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors; then an alternate path is chosen for subsequent I/O operations.

- Load balancing (lb): The path for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection; the load-balancing policy is also known as the optimized policy.

- Round robin (rr): The path for each I/O operation is chosen at random from the paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between them.

You can dynamically change the SDD path-selection policy by using the datapath set device policy command, and you can see which policy is active on a device with the datapath query device command. Example 5-58 shows that the active policy is Optimized Sequential.
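The three selection rules can be sketched abstractly. This is our own illustration of the policies as described above, not SDD source code; the function and parameter names are invented for the sketch:

```python
import random

def choose_path(policy, paths, loads, preferred, last_used):
    """Pick a path index according to an SDD-style selection policy.

    paths: list of path names; loads: in-flight I/O count per path;
    preferred: index of the preferred path; last_used: index of the
    previous pick (only round robin uses it).
    """
    if policy == "fo":                    # failover only: always the preferred path
        return preferred
    if policy == "lb":                    # load balancing: least in-flight I/Os
        least = min(loads)
        candidates = [i for i, l in enumerate(loads) if l == least]
        return random.choice(candidates)  # random among equally loaded paths
    if policy == "rr":                    # round robin: any path not used last time
        candidates = [i for i in range(len(paths)) if i != last_used]
        return random.choice(candidates)
    raise ValueError("unknown policy: " + policy)

paths = ["sda", "sdb", "sdc", "sdd"]
print(choose_path("fo", paths, [3, 0, 1, 0], preferred=0, last_used=2))  # 0
```

Note that with only two paths, round robin degenerates to strict alternation, exactly as the text describes.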
Example 5-59 shows the VDisk information from the SVC command-line interface.
Example 5-59 svcinfo redhat1
IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#
[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#

3. Create the mount point and mount the vpath drive, as shown in Example 5-62.
Example 5-62 Mount point
[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc

4. The drive is now ready for use. The df command shows the mounted disk /itsosvc, and the datapath query command shows that four paths are available (Example 5-63).
Example 5-63 Display mounted drives
[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[root@Palau ~]#
[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#
5. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.
Example 5-65 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf

6. Add the following entry to the multipath.conf file:

device {
        vendor                  "IBM"
        product                 "2145"
        path_grouping_policy    group_by_prio
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
}

7. Restart the multipath daemon (Example 5-66).
Example 5-66 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
8. Type the multipath -dl command to see the MPIO configuration. You should see two groups, each with two paths. All paths should have the state [active][ready], and one group should be [enabled].
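The exact multipath -dl layout varies with the device-mapper-multipath version, so the transcript below is a hypothetical sample in the style of that era, and the checker is our own sketch for verifying the two-groups-of-two-paths shape:

```python
# Hypothetical multipath -dl output for an SVC VDisk (illustrative only;
# the real layout depends on the device-mapper-multipath version).
SAMPLE = r"""mpath0 (360050768018301bf2800000000000010) dm-2 IBM,2145
[size=4.0G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][enabled]
 \_ 0:0:0:0 sda 8:0   [active][ready]
 \_ 1:0:0:0 sdc 8:32  [active][ready]
\_ round-robin 0 [prio=10][enabled]
 \_ 0:0:1:0 sdb 8:16  [active][ready]
 \_ 1:0:1:0 sdd 8:48  [active][ready]
"""

def summarize(text):
    """Count path groups and paths, and how many paths are [active][ready]."""
    groups = paths = ready = 0
    for line in text.splitlines():
        s = line.strip()
        if s.startswith(r"\_ round-robin"):   # a path-group header line
            groups += 1
        elif s.startswith(r"\_"):             # an individual path line
            paths += 1
            ready += "[active][ready]" in s
    return groups, paths, ready

print(summarize(SAMPLE))  # (2, 4, 4)
```

Two groups of two paths, all [active][ready], is the healthy shape described in step 8.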
9. Use fdisk to create a partition on the SVC disk, as shown in Example 5-67.
Example 5-67 fdisk
[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now
[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

11. Create a mount point and mount the drive, as shown in Example 5-69.
Example 5-69 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0
In IBM System x servers, the HBAs should always be installed in the first slots. For example, if you install two HBAs and two network cards, the HBAs should be installed in slots 1 and 2, and the network cards can be installed in the remaining slots. For older ESX versions, you will find the supported HBAs at the IBM Web site:

http://www.ibm.com/storage/support/2145

The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware Web site (clicking a link opens or downloads the PDF):

V3.02: http://www.vmware.com/pdf/vi3_io_guide.pdf
V3.5: http://www.vmware.com/pdf/vi35_io_guide.pdf

The supported HBA device drivers are already included in the ESX server build. After installation, load the default configuration of your FC HBAs. We recommend using the same model of HBA, with the same firmware, within one server. Having Emulex and QLogic HBAs that access the same target in one server is not supported.
If you are not familiar with VMware environments and the advantages of storing virtual machines and application data on a SAN, we recommend that you get an overview of the VMware products before continuing with this section. VMware documentation is available at:

http://www.vmware.com/support/pubs/
This means that, theoretically, you can run all of your virtual machines on one LUN. For performance reasons, however, in more complex scenarios it can be better to load balance virtual machines across separate HBAs, storage systems, or arrays. For example, on an ESX host running several virtual machines, it can make sense to use one slower array for guest operating systems without high I/O demands, such as Print and Active Directory services, and a faster array for database guest operating systems.
Using fewer VDisks has the following advantages:

- More flexibility to create virtual machines without creating new space on the SVC
- More possibilities for taking VMware snapshots
- Fewer VDisks to manage

Using more and smaller VDisks can have the following advantages:

- The different I/O characteristics of the guest operating systems can be accommodated
- More flexibility (the multipathing policy and disk shares are set per VDisk)
- Microsoft Cluster Service requires its own VDisk for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at:

http://www.vmware.com/vmtn/resources/

or:

http://www.vmware.com/resources/techresources/1059

Note: ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone. You can have only one VMFS volume per VDisk.
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time (Figure 5-47 on page 242). But in many configurations, such as those for high availability, the virtual machines have to share the same VMFS file to share a disk.

Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:

- None: Disks cannot be shared by other virtual machines.
- Virtual: Disks can be shared by virtual machines on the same server.
- Physical: Disks can be shared by virtual machines on any server.

Click OK to apply the setting.
Create your VDisks on the SVC and map them to the ESX hosts.

Note: If you want to use features such as VMotion, the VDisks that hold the VMFS file have to be visible to every ESX host that can host the virtual machine. In the SVC, this is achieved by selecting the Allow the virtual disks to be mapped even if they are already mapped to a host check box. The VDisk must have the same SCSI ID on each ESX host.

For this example configuration, we have created one VDisk and mapped it to our ESX host, as shown in Example 5-72.
Example 5-72 Mapped VDisk to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
1  Nile 0       12       VMW_pool   210000E08B892BCD 60050768018301BF2800000000000010
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:

1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
To configure a storage device for use in VMware, perform the following steps:

1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned VDisks, and open the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select Click here to create a datastore, or Add storage if the yellow field does not appear (Figure 5-48).
5. The Add Storage wizard appears.
6. Select Create Disk/LUN and click Next.
7. Select the SVC VDisk that you want to use for the datastore and click Next.
8. Review the disk layout and click Next.
9. Enter a datastore name and click Next.
10. Select a block size, enter the size of the new partition, and then click Next.
11. Review your selections and click Finish.

Now the created VMFS datastore appears in the Storage window (Figure 5-49), along with the details for the highlighted datastore. Check that all of the paths are available and that the Path Selection is set to Most Recently Used.
If not all paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.
The recommended multipath policy for SVC is Most Recently Used. If you have to edit this policy, perform the following steps:

1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50).
5. Select Most Recently Used.
6. Click OK.
7. Click Close.

Your VMFS datastore has now been created, and you can start using it for your guest operating systems.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
Chapter 5. Host configuration
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10. Click Add Extent.
11. Select the new free space and click Next.
12. Click Next.
13. Click Finish.

The VMFS volume has now been extended, and the new space is ready for use.
OS Cluster Support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/ 5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
- When an Inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
- When any command other than an Inquiry is sent to LUN 0 using Peripheral Device Addressing, the SVC responds as an unmapped LUN 0 would normally respond.
- When an Inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (direct access device) if a LUN is mapped at LUN 0, or as 1Fh (unknown device type) otherwise.
- When an Inquiry is sent to an unmapped LUN other than LUN 0 using Peripheral Device Addressing, the peripheral qualifier returned is 001b and the Peripheral Device Type is 1Fh (unknown or no device type). This is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
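The Inquiry response rules above can be condensed into a small decision function. This is our paraphrase of the described behavior, with invented constant and function names, not SVC code:

```python
# SCSI peripheral device type codes cited in the text.
DIRECT_ACCESS = 0x00   # 00h: direct access device (disk)
CONTROLLER    = 0x0C   # 0Ch: controller
UNKNOWN       = 0x1F   # 1Fh: unknown or no device type

def inquiry_device_type(lun, addressing, lun_mapped):
    """Peripheral device type reported for an Inquiry, per the rules above.

    addressing is "peripheral" or "flat"; lun_mapped says whether a VDisk
    is mapped at that LUN.
    """
    if addressing == "peripheral" and lun == 0:
        return CONTROLLER            # LUN 0 always reports as a controller
    # Otherwise the type depends only on whether a LUN is mapped there.
    return DIRECT_ACCESS if lun_mapped else UNKNOWN

print(hex(inquiry_device_type(0, "peripheral", True)))  # 0xc
```

The interesting asymmetry is LUN 0: under Peripheral Device Addressing it is a controller, while under Flat Space Addressing it reports as a disk when mapped.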
It is also possible to configure the multipath driver to offer a Web interface for running the commands. Before this can work, we need to configure the Web interface. Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the multipath driver package ships a template for sddsrv.conf that is named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory; on Windows platforms, it is in the directory where SDD is installed. To create the sddsrv.conf file, copy sample_sddsrv.conf to a file named sddsrv.conf in the same directory. You can then dynamically change the port binding by modifying the parameters in sddsrv.conf and setting the values of Enableport and Loopbackbind to True. Figure 5-51 shows the start window of the multipath driver Web interface.
5.17.1 IBM Redbook publications containing SVC storage subsystem attachment guidelines
It is beyond the intended scope of this redbook to describe the attachment of each and every subsystem that the SVC supports. Here is a short list of the publications that we found especially useful while writing this redbook, and in the field:

- SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how you can tune your back-end storage to maximize performance on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf

- Chapter 14 of DS8000 Performance Monitoring and Tuning, SG24-7146, describes guidelines and procedures to make the most of the performance available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf

- DS4000 Best Practices and Performance Tuning Guide, SG24-6363, describes in detail how to connect and configure your storage for optimized performance on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf

- IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller:
http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf
Chapter 6.
6.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time (PiT) copy of one or more VDisks. In the topics that follow, we describe how FlashCopy works on the SVC, and we present examples of how to configure and use it.

FlashCopy, also known as point-in-time copy, helps solve the problem of making a consistent copy of a data set that is being constantly updated. The FlashCopy source is frozen for only a few seconds or less during the PiT copy process; it can accept I/O again as soon as the PiT copy bitmap is set up and the FlashCopy function is ready to intercept read and write requests in the I/O path. Although the background copy operation takes some time, the resulting data at the target appears as though the copy were made instantaneously.

Because FlashCopy is performed at the block level, it operates underneath the operating system and application caches. The image that is presented is crash-consistent: that is, it is similar to the image that would be seen after a crash event, such as an unexpected power failure.
6.1.3 Backup
FlashCopy does not affect your backup time, but it allows you to create a PiT consistent data set (across VDisks), with a minimum of downtime for your source host. The FlashCopy target can then be mounted on a different host (or the backup server) and backed up. Using this
procedure, the backup speed becomes less important, because the backup does not require downtime for the host that depends on the source VDisks.
6.1.4 Restore
You can keep periodically created FlashCopy targets online to provide fast restore of specific files from the PiT-consistent data set on those targets; the files can simply be copied back to the source VDisk when a restore is needed.
Cascaded FlashCopy: the target VDisk of a FlashCopy mapping can itself be the source VDisk of a further FlashCopy mapping.
Whether or not the initial FlashCopy map (VDisk X -> VDisk Y) is incremental, the reverse operation will only copy modified data. Consistency groups are reversed by creating a set of new reverse FC maps and adding them to a new reverse consistency group. A consistency group cannot contain more than one FC map with the same target VDisk.
FlashCopy Manager (FCM) provides many of the features of TSM for Advanced Copy Services without the requirement to use TSM. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands, ensuring that a consistent backup of the application is taken. Databases can be put into hot backup mode, and the file system cache can be flushed, before FlashCopy is started. FCM also allows easier management of on-disk backups using FlashCopy and provides a simple interface for the reverse operation. Figure 6-3 shows the FCM feature in brief.
Figure 6-6 shows four targets and mappings taken from a single source. It also shows that there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was started), through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target VDisks are defined and because of the dependency chain that results.

A write to the source VDisk does not cause its data to be copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in the figure). The older targets refer to newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target VDisks and the true source VDisk as a type of composite source. It treats all older VDisks as a kind of target (and behaves like a source to them). If the mapping for an intermediate target VDisk shows 100% progress, then its target VDisk contains a complete set of data. In this case, mappings treat the set of newer target VDisks, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source, until all data has been copied to this target and all older targets. More information about Multiple Target FlashCopy (MTFC) can be found in 6.4.6, Interaction and dependency between MTFC on page 265.
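The read-resolution rule above — an older target looks to newer targets before falling back to the source — can be sketched as follows. This is our illustration of the dependency chain, not SVC internals:

```python
def read_block(lba, target_index, targets, source):
    """Resolve a read on targets[target_index] in a multiple-target
    FlashCopy chain. targets is ordered oldest -> newest; each target is
    a dict of lba -> data for blocks already physically copied to it.
    A miss falls through to newer targets first, then to the source."""
    for t in targets[target_index:]:   # this target, then all newer ones
        if lba in t:
            return t[lba]
    return source[lba]                 # finally, the true source VDisk

source = {0: "new-data"}               # block 0 was overwritten on the source...
t_old, t_new = {}, {0: "old-data"}     # ...after its old data went to the newest target
print(read_block(0, 0, [t_old, t_new], source))  # old-data
```

The oldest target still sees the point-in-time data even though it holds no physical copy of block 0: the single copy made to the newest target serves every older target in the chain.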
Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks, consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is about to be performed.
2. A second write is executed to update the database itself.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next. However, if the database log (writes 1 and 3) and the database itself (write 2) are on different VDisks and a FlashCopy mapping is started during this update, you need to exclude the possibility that the database VDisk is copied slightly before the database log VDisk, which would result in the target VDisks seeing writes 1 and 3 but not write 2, because the database was copied before write 2 completed.

In this case, if the database were restarted using the backup made from the FlashCopy target disks, the database log would indicate that the transaction had completed successfully when, in fact, it had not, because the FlashCopy of the VDisk with the database file was started (the bitmap was created) before the write was on the disk. The transaction is lost, and the integrity of the database is in question.
SAN Volume Controller V5.1
To overcome the issue of dependent writes across VDisks and create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an atomic operation. To achieve this condition, the SVC supports the concept of consistency groups. A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the maximum number of FlashCopy mappings supported by the SVC Cluster). FlashCopy commands can then be issued to the FlashCopy consistency group and thereby simultaneously for all FlashCopy mappings defined in the consistency group. For example, when issuing a FlashCopy start command to the consistency group, all of the FlashCopy mappings in the consistency group are started at the same time, resulting in a PiT copy that is consistent across all of the FlashCopy mappings that are contained in the consistency group.
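The exposure, and how an atomic consistency-group start removes it, can be demonstrated with a toy model of the three dependent writes. This is a sketch of the reasoning above, not SVC code:

```python
import copy

def snapshot(vol):
    """Point-in-time copy of a single volume (here, just a list of records)."""
    return copy.deepcopy(vol)

# Non-atomic copies: the database is copied before write 2, the log after write 3.
log, db = [], []
log.append("start")        # write 1: log says an update is about to happen
db_snap = snapshot(db)     # <- database VDisk copied here...
db.append("row")           # write 2: the database update itself
log.append("commit")       # write 3: log says the update completed
log_snap = snapshot(log)   # <- ...log VDisk copied slightly later
print(log_snap, db_snap)   # ['start', 'commit'] [] -- commit logged, data missing!

# With a consistency group, both volumes are copied at the same instant.
log2, db2 = ["start"], []                        # transaction in flight
log_cg, db_cg = snapshot(log2), snapshot(db2)    # atomic point: after write 1
log2.append("commit"); db2.append("row")         # transaction finishes afterward
print(log_cg, db_cg)       # ['start'] [] -- consistent: no commit without the data
```

The non-atomic pair claims a committed transaction whose data is absent; the atomic pair shows an in-flight transaction, which database recovery can handle safely.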
Maximum configurations
Table 6-1 shows the FlashCopy properties and maximum configurations.
Table 6-1 FlashCopy properties and maximum configurations

- FC targets per source — Maximum: 256. The maximum number of FC mappings that can exist with the same source VDisk.
- FC mappings per cluster — Maximum: 4096. The number of mappings is no longer limited by the number of VDisks in the cluster, so the FC component limit applies.
- FC consistency groups per cluster — Maximum: 127. An arbitrary limit policed by the software.
- FC VDisk capacity per I/O group — Maximum: 1024 TB. This is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O group. This maximum configuration consumes all 512 MB of bitmap space for the I/O group and allows no Metro Mirror or Global Mirror bitmap space. The default is 40 TB.
- FC mappings per consistency group — Maximum: 512. Due to the time taken to prepare a consistency group with a large number of mappings.
To illustrate how the FlashCopy indirection layer works, we look at what happens when a FlashCopy mapping is prepared and subsequently started. When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush write data in the cache onto the source VDisk or VDisks that are part of a consistency group.
2. Put the cache into write-through mode on the source VDisks.
3. Discard the cache for the target VDisks.
4. Establish a sync point on all of the source VDisks in the consistency group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all I/O to the source and target VDisks.
6. Enable the cache on both the source and target VDisks.

FlashCopy provides the semantics of a PiT copy using the indirection layer, which intercepts I/Os targeted at either the source or target VDisks. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs as an atomic command across all FlashCopy mappings in the consistency group. The indirection layer makes a decision about each I/O based upon:
- The VDisk and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O to go through to the underlying storage, redirects the I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be copied from the source VDisk to the target VDisk. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.
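The per-I/O decision described above can be sketched as a small function. This is an illustrative simplification for a single source/target pair, not the SVC implementation; the action strings are invented for readability:

```python
# Illustrative sketch of the FlashCopy indirection layer's per-I/O
# decision. The bitmap records, per grain, whether the grain has
# already been copied ("split") to the target.

def route_io(vdisk, op, grain_copied):
    """Return the action for one I/O.

    vdisk        -- "source" or "target"
    op           -- "read" or "write"
    grain_copied -- True if this grain is already on the target
    """
    if vdisk == "source" and op == "read":
        # Source reads always pass straight through.
        return "pass through to source"
    if vdisk == "source" and op == "write":
        if grain_copied:
            return "pass through to source"
        # Copy-on-write: preserve the point-in-time image first.
        return "copy grain to target, then write source"
    if vdisk == "target" and op == "read":
        return "read target" if grain_copied else "redirect read to source"
    # Target write: grain-split handling applies (see later discussion).
    return "write target (grain handling applies)"

assert route_io("target", "read", False) == "redirect read to source"
assert route_io("source", "read", False) == "pass through to source"
```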
Source Reads
Reads of the source are always passed through to the underlying source disk.
Target Reads
In order for FlashCopy to process a read from the target disk, it must consult its bitmap. If the data being read has already been copied to the target, the read is sent to the target disk. If it has not, the read is sent to the source VDisk, or possibly to another target VDisk if multiple FlashCopy mappings exist for the source VDisk. Clearly, this algorithm requires that, while this read is outstanding, no writes are allowed to execute that would change the data being read. The SVC satisfies this requirement by using a cluster-wide locking scheme.
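With multiple FlashCopy mappings from one source, the lookup for an uncopied grain can be sketched as follows. This is an assumed, simplified data model (ordered list of mappings with per-target bitmaps), not the SVC's internal structures; it follows the rule discussed later that a read falls through to the oldest of the newer targets holding the grain, and finally to the source:

```python
# Illustrative sketch: resolve which disk serves a target read when
# several FlashCopy mappings share one source VDisk.

def resolve_target_read(grain, reader, mappings):
    """mappings: list of (name, bitmap) ordered oldest -> newest;
    bitmap is the set of grains already copied to that target."""
    idx = [name for name, _ in mappings].index(reader)
    _, bitmap = mappings[idx]
    if grain in bitmap:
        return reader                    # already copied: read the target itself
    for newer_name, newer_bitmap in mappings[idx + 1:]:
        if grain in newer_bitmap:
            return newer_name            # oldest newer target holding the grain
    return "source"                      # no target has it yet

# tA is the oldest mapping, tC the newest (names are invented).
maps = [("tA", set()), ("tB", {2}), ("tC", {1, 2})]
assert resolve_target_read(2, "tA", maps) == "tB"
assert resolve_target_read(5, "tA", maps) == "source"
assert resolve_target_read(1, "tC", maps) == "tC"
```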
case the new grain contents are written to the target VDisk and, if this succeeds, the grain is marked as split in the FlashCopy bitmap without a copy from the source to the target having been performed. If the write fails, the grain is not marked as split. The rate at which grains are copied from the source VDisk to the target VDisk is called the copy rate. By default, the copy rate is 50, although this can be altered. For more information about copy rates, see 6.4.13, Space-efficient FlashCopy on page 274.
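The whole-grain write optimization can be sketched as below. This is an illustrative model (dict-backed stores, a set as the bitmap; all names are assumptions), not SVC code; it shows that a write covering an entire grain marks the grain as split without copying from the source, while a partial write of an uncopied grain triggers copy-on-write first:

```python
# Illustrative sketch of target-write grain handling.

GRAIN_SIZE = 256 * 1024  # 256 KB, one of the SVC grain sizes

def write_target(grain, data, bitmap, target_store, source_store):
    if len(data) == GRAIN_SIZE:
        # Whole-grain write: no copy from the source is needed.
        target_store[grain] = data
        bitmap.add(grain)                # mark split only if the write succeeded
    elif grain not in bitmap:
        # Partial write of an uncopied grain: copy-on-write first.
        target_store[grain] = source_store.get(grain, b"\0" * GRAIN_SIZE)
        bitmap.add(grain)
        old = target_store[grain]
        target_store[grain] = data + old[len(data):]
    else:
        # Grain already split: just merge the new data in.
        old = target_store[grain]
        target_store[grain] = data + old[len(data):]

bitmap, tgt, src = set(), {}, {3: b"S" * GRAIN_SIZE}
write_target(3, b"T" * GRAIN_SIZE, bitmap, tgt, src)   # whole-grain write
assert 3 in bitmap and tgt[3] == b"T" * GRAIN_SIZE     # split, never copied
```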
Target 0 is not dependent on the source because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2). Target 1 is dependent upon Target 0 and will remain dependent until all of Target 1 has been copied; Target 2 is dependent on Target 1 because Target 2 is only 20% copy complete. Once all of Target 1 has been copied, it can move to the idle_copied state. Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the idle_copied state. Target 3 has completed copying, so it is not dependent on any other mapping.
For a target read of an uncopied grain: if any newer targets exist for this source in which this grain has already been copied, the read is performed from the oldest of these; otherwise, the read is performed from the source VDisk.
This means that the copy latency is typically incurred when data is destaged from the cache, rather than on write operations from an application, which would otherwise be blocked waiting for the copy operation to complete. In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.
id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
. . .
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name VDisk_A_copy -mdiskgrp MDG_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created

Tip: Alternatively, the expandvdisksize and shrinkvdisksize commands can be used to modify the size of a VDisk; these commands support specification of the size in bytes. See 7.4.10, Expanding a VDisk on page 365 and 7.4.16, Shrinking a VDisk on page 369 for more information.

An image mode VDisk can be used as either a FlashCopy source or target VDisk.
Figure 6-11 FlashCopy mapping state diagram

Table 6-3 Mapping events

Create: A new FlashCopy mapping is created between the specified source virtual disk (VDisk) and the specified target VDisk. The operation fails if any of the following is true:
- For SAN Volume Controller software Version 4.1.0 or earlier, the source or target VDisk is already a member of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source or target VDisk is already a target VDisk of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source VDisk is already a member of 16 FlashCopy mappings.
- For SAN Volume Controller software Version 4.3.0 or later, the source VDisk is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target VDisks are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed either to a consistency group, for FlashCopy mappings that are members of a normal consistency group, or to the mapping name, for FlashCopy mappings that are stand-alone mappings. The command places the FlashCopy mapping into the preparing state.
Attention: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target VDisk, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a consistency group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly with respect to I/Os that are directed at the VDisks. This is achieved with the startfcmap or startfcconsistgrp command. The following occurs while the startfcmap or startfcconsistgrp command runs:
- New reads and writes to all source VDisks in the consistency group are paused in the cache layer until all ongoing reads and writes below the cache layer are completed.
- After all FlashCopy mappings in the consistency group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the consistency group, read and write operations are unpaused on the source VDisks.
- The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target VDisks.

Modify: The following FlashCopy mapping properties can be modified:
- FlashCopy mapping name
- Clean rate
- Consistency group
- Copy rate (for background copy)
- Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped:
- You have issued a command.
- An I/O error has occurred.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.

Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.
Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target, but they behave as independent VDisks in this state.
Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the background copy is running. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for some tracks. Read and write caching is enabled on the source and the target.
Stopped
The FlashCopy was stopped either by user command or by an I/O error. When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Because of this, while the FlashCopy mapping is in this state, the target VDisk is in the offline state. In order to regain access to the target, the mapping must be started again (the previous PiT will be lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible and read/write caching is enabled for the source. In the stopped state, a mapping can be prepared again or it can be deleted.
Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target VDisk depends on whether the background copy process had completed while the mapping was in the copying state. If the copy process had completed, the target VDisk remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target VDisk, the target VDisk is taken offline, and the stopping copy process runs. When the data has been copied, a stop complete asynchronous event is notified. The mapping then moves to the idle_or_copied state if the background copy has completed, or to the stopped state if it has not. The source VDisk remains accessible for I/O.
Suspended
The target has been flashed from the source, and was in the copying or stopping state. Access to the metadata has been lost, and as a consequence, both source and target VDisks are offline. The background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping will return to the copying or stopping state, access to the source and target VDisks will be restored, and the background copy or stopping process resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache, consuming resources, until the FlashCopy mapping leaves the suspended state.
Preparing
Since the FlashCopy function is placed logically below the cache to anticipate any write latency problem, it requires that, at the time the FlashCopy operation is started, there is no read or write data for the target and no write data for the source in the cache. This ensures that the resulting copy is consistent.
Performing the necessary cache flush as part of the startfcmap or startfcconsistgrp command would unnecessarily delay the I/Os received after the command is executed, because these I/Os must wait for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing I/Os to continue to the source VDisk. In the preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source VDisk from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes wait until the data has been written to disk before completing the write command received from the host.
3. Discarding any read or write data associated with the target VDisk from the cache.

While in this state, writes to the source VDisk experience additional latency because the cache is operating in write-through mode. While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will not perform reads or writes; these are failed by the SCSI front end. Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers in the host operating systems or applications, is also instructed to flush any outstanding writes to the source VDisk.
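The three preparation steps can be sketched as a tiny simulation. The class and attribute names here are assumptions chosen for readability; the real logic lives inside the SVC cache and FlashCopy components:

```python
# Illustrative sketch of the preparing sequence for one mapping.

class CacheModel:
    def __init__(self):
        # Dirty (modified) data held in cache for each VDisk.
        self.dirty = {"src": [b"pending-write"], "tgt": [b"stale-data"]}
        self.mode = {"src": "write-back", "tgt": "write-back"}

    def prepare_flashcopy(self):
        # 1. Flush modified source data to disk (read data may stay cached).
        self.dirty["src"].clear()
        # 2. Source cache goes write-through: subsequent writes complete
        #    only once the data is on disk, adding latency while preparing.
        self.mode["src"] = "write-through"
        # 3. Discard all cached data for the target.
        self.dirty["tgt"].clear()
        return "prepared"

c = CacheModel()
assert c.prepare_flashcopy() == "prepared"
assert c.mode["src"] == "write-through" and not c.dirty["tgt"]
```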
Prepared
When in the prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target VDisk is in the offline state. In the prepared state, writes to the source VDisk experience additional latency because the cache is operating in write through mode.
The grains per second numbers represent the maximum number of grains the SVC will copy per second, assuming that the bandwidth to the MDisks can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, then background copy I/O contends for resources on an equal basis with I/O arriving from hosts. Both tend to see an increase in latency, and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O group in which the source VDisk resides.
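The copy rate to bandwidth mapping can be sketched as a helper. The values follow the commonly documented SVC background copy rate table (each 10-point band doubles the rate, from 128 KBps at rates 1 to 10 up to 64 MBps at rates 91 to 100); treat them as an assumption and confirm against the product documentation for your code level:

```python
# Illustrative helper: background copy bandwidth for a copy rate of 1-100.

def background_copy_kbps(copy_rate):
    if not 1 <= copy_rate <= 100:
        raise ValueError("copy rate must be between 1 and 100")
    band = (copy_rate - 1) // 10          # 0..9, one band per 10 points
    return 128 * (2 ** band)              # KBps, doubling per band

def grains_per_second(copy_rate, grain_kb=256):
    # Grains copied per second for a given grain size (256 KB assumed).
    return background_copy_kbps(copy_rate) / grain_kb

assert background_copy_kbps(50) == 2048   # default rate 50: 2 MBps
assert grains_per_second(100) == 256.0    # 64 MBps / 256 KB grains
```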
6.4.15 Synthesis
The FlashCopy functionality in SVC simply creates copies of VDisks: all of the data in the source VDisk is copied to the destination VDisk, including operating system control information as well as application data and metadata. Some operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata in the target VDisk so that the operating system can use the disk.
Node failure
Normally, two copies of the FlashCopy bitmaps are maintained in non-volatile memory, one on each of the two SVC nodes making up the I/O group of the source VDisk. When a node fails, one copy of the bitmaps, for all FlashCopy mappings whose source VDisk is a member of the failing node's I/O group, becomes inaccessible. FlashCopy continues with a single copy of the FlashCopy bitmap, stored as non-volatile in the remaining node of the source I/O group, and the mapping continues in the copying state. The cluster metadata is updated to indicate that the missing node no longer holds up-to-date bitmap information. When the failing node recovers, or a replacement node is added to the I/O group, up-to-date bitmaps are re-established on the new node, which once again provides a redundant location for the bitmaps.

When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in the I/O group is accessible), a suspended FlashCopy mapping returns to the copying state, access to the source and target VDisks is restored, and the background copy process resumes. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the suspended state.

Note: If both nodes in the I/O group to which the target VDisk belongs become unavailable, the host cannot access the target VDisk.
STOP_COMPLETED: This is logged when the FlashCopy mapping or consistency group has entered the stopped state as a result of a user request to stop. It is logged once the automatic copy process has completed, including mappings where no copying needed to be performed. It is different from the error that is logged when a mapping or group enters the stopped state as a result of an I/O error.
This shows four clusters in a star topology, with A at the center and partnerships A-B, A-C, and A-D. Cluster A could be a central disaster recovery site for the three other locations. Using a star topology, it is possible to migrate different applications at different times using a process such as:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or, alternatively, the B-C relationship).
4. Synchronize to cluster C, and ensure that A-C is established.

Figure 6-15 shows a triangle topology: three clusters with partnerships A-B, A-C, and B-C. Figure 6-16 shows a fully connected topology, with partnerships A-B, A-C, A-D, B-C, B-D, and C-D.
This is a fully connected mesh where every cluster has a partnership to each of the three others, so VDisks can be replicated between any pair of clusters. Note that this is not required unless relationships are needed between every pair of clusters.

The other option is a daisy chain between four clusters, with partnerships A-B, B-C, and C-D. This gives a type of cascading layout; however, a VDisk must be in only one relationship, such as A-B. At the time of writing, a three-site solution, such as DS8000 MGM, is not supported. Figure 6-17 shows a daisy-chain topology.
Unsupported topology
As an illustration of what is not supported, consider the configuration A-B, B-C, C-D, D-E. Figure 6-18 shows this unsupported topology.
This is unsupported, because five clusters are indirectly connected. If the cluster can detect this at the time of the fourth mkpartnership command, the command is rejected.

Note: The introduction of Multi-Cluster Mirroring necessitates some upgrade restrictions:
- Concurrent Code Upgrade (CCU) to 5.1.0 is supported from 4.3.1.x only.
- If the cluster is in a partnership, the partnered cluster must meet a minimum software level to allow concurrent I/O: the partnered cluster must be running 4.2.1 or higher.
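The topology rule above can be sketched as a connectivity check: every cluster reachable through partnerships, directly or indirectly, counts toward the four-cluster limit, so a fifth indirectly connected cluster is rejected. This is an illustrative model of the rule (function names invented), not the cluster's actual validation code:

```python
# Illustrative sketch: reject a new partnership if it would indirectly
# connect more than four clusters.

def component(cluster, partnerships):
    """All clusters reachable from `cluster` via partnership edges."""
    seen, stack = set(), [cluster]
    while stack:
        c = stack.pop()
        if c in seen:
            continue
        seen.add(c)
        stack.extend(b if a == c else a
                     for a, b in partnerships if c in (a, b))
    return seen

def mkpartnership_allowed(a, b, partnerships, limit=4):
    merged = component(a, partnerships) | component(b, partnerships) | {a, b}
    return len(merged) <= limit

chain = [("A", "B"), ("B", "C"), ("C", "D")]       # daisy chain A-B-C-D
assert mkpartnership_allowed("A", "D", chain)       # still four clusters
assert not mkpartnership_allowed("D", "E", chain)   # fifth cluster: rejected
```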
applications need to perform large numbers of update operations in parallel with storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption. An application that performs a large set of updates, for example, a database, is usually designed with the concept of dependent writes. These are writes where it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application's algorithms and can lead to problems, such as detected or undetected data corruption.
The database ensures correct ordering of these writes by waiting for each step to complete before starting the next.
Note: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure. But imagine if the database log and database itself are on different VDisks and a Metro Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Metro Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log. If this were the case, then it could be possible that the secondary VDisks see writes (1) and (3), but not (2). Then, if the database was restarted using data available from secondary disks, the database log would indicate that the transaction had completed successfully, when that is not the case. In this scenario, the integrity of the database is in question.
Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror consistency groups provide the ability to group relationships so that they are manipulated in unison. Metro Mirror relationships can be arranged in either form:
- Metro Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.
- A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

All of the relationships in a consistency group must have matching master and auxiliary SVC clusters. Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited that would be permitted if the relationship were not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. If an error causes a loss of synchronization, a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the secondary VDisks of either application. If one application finishes its background copy much more quickly than the other, Metro Mirror still refuses to grant access to its secondary, even though it would be safe in this case,
SAN Volume Controller V5.1
because the Metro Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent. Stand-alone relationships and consistency groups share a common configuration and state model. All of the relationships in a non-empty consistency group have the same state as the consistency group.
If the designated node should fail (or all its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This causes I/O to pause, but does not cause relationships to become Consistent Stopped.
If these steps are not performed correctly, then Metro Mirror will report the relationship as being consistent when it is not. This is likely to make any secondary disk useless. This method has an advantage over full synchronization, in that it does not require all the data to be copied over a constrained link. However, if data needs to be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.
When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, in which case the background copy process is skipped. This is especially useful when creating Metro Mirror relationships for VDisks that have been created with the format option. Create and run the relationship as follows:
1. Create the Metro Mirror relationship:
   a. If the Metro Mirror relationship is created with the -sync option, it enters the Consistent stopped state.
   b. If the Metro Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, it enters the Inconsistent stopped state.
2. Start the Metro Mirror relationship:
   a. When starting a Metro Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state; otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.
   b. When starting a Metro Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state while the background copy is started.
3. When the background copy completes, the Metro Mirror relationship transits from the Inconsistent copying state to the Consistent synchronized state.
4. Stop the Metro Mirror relationship:
   a. When stopping a Metro Mirror relationship in the Consistent synchronized state and specifying the -access option, which enables write I/O on the secondary VDisk, the Metro Mirror relationship enters the Idling state.
   b. To enable write I/O on the secondary VDisk when the Metro Mirror relationship is in the Consistent stopped state, issue the svctask stoprcrelationship command specifying the -access option, and the Metro Mirror relationship enters the Idling state.
5. Restart the Metro Mirror relationship:
   a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Given that no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Metro Mirror relationship enters the Consistent synchronized state.
   b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.

Stop or error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Metro Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state. If the connection is broken between the SVC clusters in a partnership, all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 293.

Note: Stand-alone relationships and consistency groups share a common configuration and state model.
This means that all Metro Mirror relationships in a non-empty consistency group have the same state as the consistency group.
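The state transitions walked through in the numbered steps above can be sketched as a lookup table. This is a simplified illustration; the event names are assumptions mirroring the commands and options in the text, and it covers only the connected states discussed in the steps:

```python
# Illustrative sketch of Metro Mirror relationship state transitions.

TRANSITIONS = {
    ("ConsistentStopped",      "start"):        "ConsistentSynchronized",
    ("ConsistentStopped",      "start -force"): "InconsistentCopying",
    ("ConsistentStopped",      "stop -access"): "Idling",
    ("InconsistentStopped",    "start"):        "InconsistentCopying",
    ("InconsistentCopying",    "copy done"):    "ConsistentSynchronized",
    ("InconsistentCopying",    "stop"):         "InconsistentStopped",
    ("ConsistentSynchronized", "stop"):         "ConsistentStopped",
    ("ConsistentSynchronized", "stop -access"): "Idling",
    ("Idling",                 "start"):        "ConsistentSynchronized",
    ("Idling",                 "start -force"): "InconsistentCopying",
}

def next_state(state, event):
    return TRANSITIONS[(state, event)]

# A relationship created with -sync, started, then given secondary access:
s = "ConsistentStopped"
s = next_state(s, "start")          # ConsistentSynchronized
s = next_state(s, "stop -access")   # write I/O enabled on the secondary
assert s == "Idling"
```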
In this scenario, each cluster is left with half the relationship and has only a portion of the information that was available to it before. Some limited configuration activity is possible, and is a subset of what was possible before. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship, and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected once again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state it was in when it became disconnected or it can enter a different connected state. Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.
The application might work without a problem. Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. Consistency as a concept can be applied to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all of those disks. When deciding how to use consistency groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, then one of the following approaches must be taken:
- All of the data accessed by the group of systems is placed into a single consistency group.
- The systems are recovered independently (each within its own consistency group); then, each system performs recovery with the other applications to become consistent with them.
InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either. A copy process needs to be started to make the secondary consistent. This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group. In this state, a background copy process runs that copies data from the primary to the secondary virtual disk. In the absence of errors, an InconsistentCopying relationship is active, and the Copy Progress increases until the copy process completes. In some error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or consistency group into an InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to ConsistentSynchronized state. If the relationship or consistency group becomes disconnected, then the secondary side transits to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.
ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary. This state can arise when a relationship was in the ConsistentSynchronized state and suffers an error that forces a consistency freeze. It can also arise when a relationship is created with the CreateConsistentFlag set to TRUE. Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the flag is set to FALSE). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this, and the relationship or consistency group transits to InconsistentCopying. Do this only after all outstanding errors are repaired. In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the
296
relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and secondary. If the relationship or consistency group becomes disconnected, then the secondary side transits to ConsistentDisconnected. The primary side transitions to IdlingDisconnected. An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped with a status of Online state. This can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.
ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O, and the secondary VDisk is accessible for read-only I/O. Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks. Either good completion must be received for both writes, the write must be failed to the host, or a state must transit out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the primary and secondary roles. A start command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
This is a connected state. Both the master and auxiliary disks are operating in the primary role. Consequently, both are accessible for write I/O. In this state, the relationship or consistency group accepts a start command. Metro Mirror maintains a record of the regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to a loss of consistency, the -force parameter must be specified. Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is such a loss. Also, while in this state, the relationship or consistency group accepts a -clean option on the start command.

If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.
IdlingDisconnected
This is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.
Chapter 6. Advanced Copy Services
The main priority in this state is to recover the link and make the relationship or consistency group connected once more. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on:

- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transits from TRUE to FALSE) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised to notify you of this situation. This error log is the same as the one raised when the same situation arises for ConsistentSynchronized.
InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O. No configuration activity except for deletes is permitted until the relationship becomes connected again.

When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically unless either:

- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.
ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to TRUE transits the relationship or consistency group to the IdlingDisconnected state. This allows write I/O to be performed to the secondary VDisk and is used as part of a disaster recovery scenario.
When the relationship or consistency group becomes connected again, the relationship or consistency group becomes ConsistentSynchronized only if this does not lead to a loss of consistency. This is the case provided that:

- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show. It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point the state of the relationship becomes the state of the consistency group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a Status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks that the node is responsible for. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.
To enable access to the secondary VDisk for host operations, the Metro Mirror relationship must be stopped by specifying the -access parameter. While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and related tasks before the application can be started, or instructed to perform a recovery process. For example, the Metro Mirror requirement to enable the secondary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but suppresses one while the copy is being maintained. Using a secondary copy demands a conscious policy decision by the administrator, that a failover is required, and the tasks to be performed on the host involved in establishing operation on the secondary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.
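The failover described above can be sketched as the following CLI sequence. This is a minimal sketch only: the relationship name MM_REL1, the host object REMOTE_HOST1, and the secondary VDisk name MM_DB_Sec are hypothetical, and the host-side recovery steps depend entirely on your operating system and application.

```shell
# Stop the Metro Mirror relationship and enable host access to the
# secondary VDisk (the relationship enters the Idling state)
svctask stoprcrelationship -access MM_REL1

# Map the now-writable secondary VDisk to the recovery host
svctask mkvdiskhostmap -host REMOTE_HOST1 MM_DB_Sec

# On the host itself: rescan for the new disk, mount it, perform any
# application recovery (for example, database log replay), then start
# the application
```

Because -access moves the relationship to Idling, a subsequent start command must specify the copy direction with the -primary argument.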
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions
Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single VDisk.
Table 6-7  Valid VDisk combinations

                                             FlashCopy source   FlashCopy target
  Metro Mirror or Global Mirror primary      Supported          Not supported
  Metro Mirror or Global Mirror secondary    Supported          Not supported
There is a per-I/O group limit of 1024 TB on the quantity of primary and secondary VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O group and leaves no bitmap space for FlashCopy.
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This is a prerequisite for creating Metro Mirror relationships.
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the
bandwidth defaults to 50 MBps. The bandwidth should be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
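The two-step partnership creation can be sketched as follows. The cluster names ITSO-CLS1 and ITSO-CLS2 and the 20 MBps allowance are hypothetical examples; choose a bandwidth that your intercluster link can sustain.

```shell
# On ITSO-CLS1: confirm that the remote cluster is visible as a candidate
svcinfo lsclustercandidate

# On ITSO-CLS1: create the one-way partnership toward ITSO-CLS2,
# limiting background copy to 20 MBps
svctask mkpartnership -bandwidth 20 ITSO-CLS2

# On ITSO-CLS2: issue the matching command to make the partnership
# fully functional in both directions
svctask mkpartnership -bandwidth 20 ITSO-CLS1
```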
svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
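For example, to reduce the background copy bandwidth of an existing partnership to 10 MBps (the cluster name is a hypothetical example):

```shell
# Lower the background copy allowance for the partnership with ITSO-CLS2
svctask chpartnership -bandwidth 10 ITSO-CLS2
```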
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Metro Mirror consistency group. The Metro Mirror consistency group name must be unique across all consistency groups known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process. The new consistency group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group upon creation or afterward by using the svctask chrcrelationship command.
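For example, an empty group spanning the local cluster and a remote cluster can be created as follows (the group and cluster names are hypothetical):

```shell
# Create an empty Metro Mirror consistency group whose auxiliary
# VDisks will reside on the remote cluster ITSO-CLS2
svctask mkrcconsistgrp -name CG_MM_APP1 -cluster ITSO-CLS2
```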
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted. The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The master and auxiliary VDisk cannot be in an existing relationship, and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, it can be added to an already existing consistency group, or be a stand-alone Metro Mirror relationship if no consistency group is specified. To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Metro Mirror relationship, use the command svcinfo lsrcrelationshipcandidate.
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all VDisks that are not disallowed by some other configuration state, such as being a FlashCopy target, are listed.
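The candidate check and relationship creation can be sketched together as follows. All VDisk, cluster, group, and relationship names are hypothetical, and the flag usage reflects the 5.1 CLI as described above; verify against the command reference for your code level.

```shell
# List remote VDisks that are eligible to partner MM_DB_Pri
svcinfo lsrcrelationshipcandidate -master MM_DB_Pri -aux ITSO-CLS2

# Create a stand-alone Metro Mirror relationship
svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec \
  -cluster ITSO-CLS2 -name MM_REL1

# Or create a relationship directly inside an existing consistency group
svctask mkrcrelationship -master MM_LOG_Pri -aux MM_LOG_Sec \
  -cluster ITSO-CLS2 -name MM_REL2 -consistgrp CG_MM_APP1
```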
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:

- Change the name of a Metro Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Note: When adding a Metro Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
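A sketch of adding and removing a relationship, with hypothetical names; the -noconsistgrp flag shown for detaching a relationship is our assumption about the 5.1 CLI syntax, so confirm it against the command reference.

```shell
# Add an existing stand-alone relationship to a consistency group
svctask chrcrelationship -consistgrp CG_MM_APP1 MM_REL1

# Remove the relationship from its group again (it becomes stand-alone);
# -force is required as described above
svctask chrcrelationship -force -noconsistgrp MM_REL1
```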
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror consistency group.
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, the copy direction can be set, if it is undefined, and, optionally, the secondary VDisk of the relationship can be marked as clean. The command fails if it is used to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is not consistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force flag here is a reminder that the data on the secondary becomes inconsistent while resynchronization (background copying) occurs, and therefore the data is not usable for disaster recovery purposes before the background copy has completed.

In the idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the primary argument, but it must match the existing setting.
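The two start scenarios above can be sketched as follows (the relationship name is hypothetical):

```shell
# Start an InconsistentStopped or ConsistentStopped relationship;
# the existing copy direction is retained
svctask startrcrelationship MM_REL1

# Start a relationship from the Idling state: the copy direction must
# be specified, and -force acknowledges that the secondary becomes
# inconsistent while resynchronization runs
svctask startrcrelationship -primary master -force MM_REL1
```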
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue a svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state) then the -access parameter can be used with the stoprcrelationship command to enable write access to the secondary VDisk.
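Both uses of the command can be sketched as follows (the relationship name is hypothetical):

```shell
# Stop the copy process; a ConsistentSynchronized relationship enters
# the ConsistentStopped state (a consistency freeze)
svctask stoprcrelationship MM_REL1

# Stop and enable write access to a consistent secondary VDisk;
# the relationship enters the Idling state
svctask stoprcrelationship -access MM_REL1
```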
svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group. This command can only be issued to a consistency group that is connected. For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by some I/O error.
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a consistent state. If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks belonging to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a consistency freeze. When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), then the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
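The group-level commands mirror their relationship-level counterparts, as this sketch shows (the group name is hypothetical):

```shell
# Start the copy process for every relationship in the group;
# -primary and -force behave as for single relationships
svctask startrcconsistgrp CG_MM_APP1

# Stop the group and enable write access to all of its secondary VDisks
svctask stoprcconsistgrp -access CG_MM_APP1
```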
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves. If the relationship is disconnected at the time that the command is issued, then the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This is the one case in which Metro Mirror does not inhibit access to inconsistent data.
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group. This command deletes the specified consistency group. You can issue this command for any existing consistency group. If the consistency group is disconnected at the time that the command is issued, then the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the consistency group is not empty, then the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of primary and secondary VDisk when a stand-alone relationship is in a consistent state. When issuing the command, the desired primary is specified.
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of primary and secondary VDisk when a consistency group is in a consistent state. This change is applied to all the relationships in the consistency group, and when issuing the command, the desired primary is specified.
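Both switch commands take the desired primary as an argument, as this sketch shows (relationship and group names are hypothetical):

```shell
# Reverse the roles of a stand-alone consistent relationship so that
# the auxiliary VDisk becomes the primary
svctask switchrcrelationship -primary aux MM_REL1

# Reverse the roles for every relationship in a consistent group
svctask switchrcconsistgrp -primary aux CG_MM_APP1
```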
The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link. Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the host issuing the write before it is mirrored to the cache for the auxiliary VDisk.
The Global Mirror algorithms operate so as to maintain a consistent image at the secondary at all times. They achieve this by identifying sets of I/Os that are active concurrently at the primary, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of write ordering and read stability described in this chapter. The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster, and so it is not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing the cluster size, while maintaining consistency across a growing data set.

In a failover scenario, where the secondary site needs to become the primary source of data, some updates might be missing at the secondary site. Therefore, any applications that will use this data must have some external mechanism for recovering the missing updates and reapplying them, for example, transaction log replay.
SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster (and I/O group), although, as stated earlier, this functionality is better suited to Metro Mirror. SVC supports intercluster Global Mirror, where each VDisk belongs to a separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters. Intercluster and intracluster Global Mirror can be used concurrently within a cluster for different relationships.

SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.

SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events. SVC maintains and polices a strong concept of consistency and makes this available to guide configuration activity. SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.

Colliding writes: Prior to 4.3.1, the Global Mirror algorithm required that only a single write be active on any given 512-byte LBA of a virtual disk. If a further write is received from a host while the secondary write is still active, even though the primary write might have completed, the new host write is delayed until the secondary write is complete. This restriction is needed in case a series of writes to the secondary has to be retried (reconstruction). Conceptually, the data for reconstruction comes from the primary VDisk.
If more than one write is allowed to be applied to the primary for a given sector, then only the most recent write gets the correct data during reconstruction, and if reconstruction were interrupted for any reason, the intermediate state of the secondary would not be consistent. Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A VDisk statistic is maintained of the frequency of these collisions.

From 4.3.1 onwards, multiple writes to a single location are allowed to be outstanding in the Global Mirror algorithm. There is still a need for primary writes to be serialized, and the intermediate states of the primary data must be kept in a non-volatile journal while the writes are outstanding, to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the secondary with an earlier version. The VDisk statistic monitoring colliding writes is now limited to those writes that are not affected by this change. Figure 6-24 on page 311 shows a colliding write sequence example.
In Figure 6-24, you can see the following sequence:

1. The original Global Mirror write is in progress.
2. A second write arrives for the same sector, and the in-flight write is logged to the journal file.
3. and 4. The second write is sent to the secondary cluster.
5. The initial write completes.

An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary virtual disks. This allows testing to be performed that detects colliding writes, and so it can be used to test an application before full deployment of the feature. The feature can be enabled separately for intracluster or intercluster Global Mirror. The delay setting is specified using the chcluster command and viewed using the lscluster command. The gm_intra_delay_simulation property expresses the amount of time that intracluster secondary I/Os are delayed, and the gm_inter_delay_simulation property expresses the amount of time that intercluster secondary I/Os are delayed. A value of zero disables the feature.

SVC 5.1 introduces Multi-Cluster Mirroring (MCM). The rules for a Global Mirror MCM environment are the same as in a Metro Mirror environment. For more detailed information, see 6.5.4, "Multi-Cluster-Mirroring (MCM)" on page 281.
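The delay simulation settings can be sketched as follows. The parameter spellings and the assumption that the values are in milliseconds reflect our understanding of the 5.1 CLI; the cluster name is hypothetical. Verify both against the command reference before use.

```shell
# Delay intercluster Global Mirror secondary I/Os by 20 ms
svctask chcluster -gminterdelaysimulation 20

# Delay intracluster Global Mirror secondary I/Os by 40 ms
svctask chcluster -gmintradelaysimulation 40

# Verify the settings; a value of 0 disables the simulation
svcinfo lscluster ITSO-CLS1
```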
A Global Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O group, within the same SVC cluster (intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC partners (intercluster Global Mirror). Note: Be aware that: A VDisk can only be part of one Global Mirror relationship at a time. A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.
1. A write is executed to update the database log, indicating that a database update is to be performed. 2. A second write is executed to update the database. 3. A third write is executed to update the database log, indicating that the database update has completed successfully. The write sequence is illustrated in Figure 6-26.
The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next. Note: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure. But imagine if the database log and the database itself are on different VDisks and a Global Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Global Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log. If this were the case, then it could be possible that the secondary VDisks see writes (1) and (3) but not (2). Then, if the database was restarted using the data available from the secondary disks, the database log would indicate that the transaction had completed successfully, when this is not the case. In this scenario, the integrity of the database is in question.
Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror consistency groups provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.
A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

All the relationships in a consistency group must have matching master and auxiliary SVC clusters. Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited; these commands would be permitted if the relationship were not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Global Mirror rejects attempts to enable access to the secondary VDisks of either application. If one application finishes its background copy much more quickly than the other, Global Mirror still refuses to grant access to its secondary, even though it is safe in this case, because the Global Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All the relationships in a non-empty consistency group have the same state as the consistency group.
To handle error conditions, the SVC can be configured to raise SNMP traps or to send e-mail notifications. If TPC-R is in place, TPC-R can also monitor the link status and alert by using SNMP traps or e-mail.
6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the access option enabled, before write I/O is submitted to the secondary. 7. The remote host server is mapped to the auxiliary VDisk and the disk is available for I/O.
With this technique, only the data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with "Synchronized before creation" on page 318, the copy step must be performed correctly, or else the auxiliary will be useless, although the relationship will report that it is synchronized.
When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, in which case the background copy process is skipped. This capability is especially useful when creating Global Mirror relationships for VDisks that have been created with the format option. The following steps explain the Global Mirror state diagram:

1. Step 1 is done as follows:
   a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the Consistent stopped state.
   b. The Global Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent stopped state.
2. Step 2 is done as follows: a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state, otherwise the -force option must be specified, and the Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started. b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state, while the background copy is started. 3. Step 3 is done as follows: a. When the background copy completes, the Global Mirror relationship transits from the Inconsistent copying state to the Consistent synchronized state. 4. Step 4 is done as follows: a. When stopping a Global Mirror relationship in the Consistent synchronized state, where specifying the -access option enables write I/O on the secondary VDisk, the Global Mirror relationship enters the Idling state. b. To enable write I/O on the secondary VDisk, when the Global Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state. 5. Step 5 is done as follows: a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Given that no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Global Mirror relationship enters the Consistent synchronized state. b. In case write I/O has been performed to either the master or the auxiliary VDisk, then the -force option must be specified, and the Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started. 
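The state walkthrough above maps roughly to the following CLI sequence. The VDisk, cluster, and relationship names are hypothetical, and whether you need -force or -primary depends on the write activity that actually occurred at each step.

```shell
# Step 1a: create the relationship already in sync; -global makes it
# Global Mirror rather than Metro Mirror (ConsistentStopped)
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec \
  -cluster ITSO-CLS2 -name GM_REL1 -global -sync

# Step 2a: start it; with no intervening writes it becomes
# ConsistentSynchronized
svctask startrcrelationship GM_REL1

# Step 4a: stop with write access to the secondary (Idling)
svctask stoprcrelationship -access GM_REL1

# Step 5b: restart from Idling after write I/O; set the copy direction
# and force resynchronization (InconsistentCopying while copying)
svctask startrcrelationship -primary master -force GM_REL1
```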
Stop or Error: When a Global Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, this means that Global Mirror relationships in the Consistent synchronized state enter the Consistent stopped state and Global Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state. In a case where the connection is broken between the SVC clusters in a partnership, then all (intercluster) Global Mirror relationships enter a disconnected state. For further information, refer to Connected versus disconnected on page 320. Note: Stand-alone relationships and consistency groups share a common configuration and state model. This means that all the Global Mirror relationships in a non-empty consistency group have the same state as the consistency group.
Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other. When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected. In this scenario, each cluster is left with half the relationship and has only a portion of the information that was available to it before. Some limited configuration activity is possible, and is a subset of what was possible before. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship, and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected once again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state it was in when it became disconnected or it can enter a different connected state. Relationships that are configured between virtual disks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.
Consistency requires:
- Write ordering
- Read stability for correct operation at the secondary

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. Consistency as a concept can be applied to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all those disks.

When deciding how to use consistency groups, the administrator must consider the scope of an application's data, taking into account all the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might be required:
- All the data accessed by the group of systems must be placed into a single consistency group.
- The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.
InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either. A copy process needs to be started to make the secondary consistent. This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.
InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group. In this state, a background copy process runs that copies data from the primary to the secondary virtual disk. In the absence of errors, an InconsistentCopying relationship is active, and the Copy Progress increases until the copy process completes. In some error situations, the copy progress might freeze or even regress. A persistent error or Stop command places the relationship or consistency group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state. If the relationship or consistency group becomes disconnected, then the secondary side transits to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.
ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary. This state can arise when a relationship was in Consistent Synchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE.
Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this, and the relationship or consistency group transits to InconsistentCopying. Do this only after all outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, then the secondary side transits to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. This can be configured to enable an SNMP trap and provide a trigger for automation software to consider issuing a start command following a loss of synchronization.
ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O. The secondary VDisk is accessible for read-only I/O. Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks. Either good completion must be received for both writes, the write must be failed to the host, or a state must transit out of ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the primary and secondary roles. A start command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
This is a connected state. Both master and auxiliary disks are operating in the primary role. Consequently, both are accessible for write I/O. In this state, the relationship or consistency group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O. This is indicated by the synchronized status. If the start command leads to loss of consistency, then a -force parameter must be specified. Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is such a loss.
Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, then both sides change their state to IdlingDisconnected.
IdlingDisconnected
This is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O. The main priority in this state is to recover the link and make the relationship or consistency group connected once more. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transits from TRUE to FALSE) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised to notify of this condition. This error log is the same as the one raised when the same situation arises in the ConsistentSynchronized state.
InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again. When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically unless either:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.
ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected. In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This corresponds to the time of the last successful heartbeat to the other cluster.
Chapter 6. Advanced Copy Services
A stop command with the -access flag set to TRUE transits the relationship or consistency group to the IdlingDisconnected state. This allows write I/O to be performed to the secondary VDisk and is used as part of a disaster recovery scenario. When the relationship or consistency group becomes connected again, the relationship or consistency group becomes ConsistentSynchronized only if this does not lead to a loss of consistency. This is the case provided that:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
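For scripting, the FreezeTime attribute can be read from the detailed svcinfo lsrcrelationship view. A minimal sketch, parsing a captured sample of that output (the relationship name and timestamp shown are made up for illustration):

```shell
# Illustrative fragment of 'svcinfo lsrcrelationship <rel>' detailed output;
# the relationship name and timestamp are invented for this example.
sample='name GM_REL1
state consistent_disconnected
freeze_time 2009/10/12/14/33/02'

# Pick out the freeze_time attribute (the point at which consistency froze).
freeze_time=$(echo "$sample" | awk '$1 == "freeze_time" {print $2}')
echo "$freeze_time"
```

In a real script, the sample text would instead come from an ssh call to the cluster, as in Example 6-2.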
Empty
This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show. It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point the state of the relationship becomes the state of the consistency group.
While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and perform other related tasks before the application can be started or instructed to perform a recovery process. Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host involved in establishing operation on the secondary copy are substantial. The goal is to make this process rapid (much faster than recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) support for the command-line interface (CLI) to enable this automation.
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions
Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a VDisk.
Table 6-7   VDisk valid combinations

                                            FlashCopy source   FlashCopy target
Metro Mirror or Global Mirror primary       Supported          Not supported
Metro Mirror or Global Mirror secondary     Supported          Not supported
There is a per-I/O-group limit of 1024 TB on the quantity of primary and secondary VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O group and leaves no bitmap space available for FlashCopy.
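As a back-of-envelope check on these figures, assuming the quoted limits of 512 MB of bitmap space tracking 1024 TB of address space, each bitmap bit covers a 256 KB grain:

```shell
# Derive the implied grain size per bitmap bit from the quoted limits.
addr_bytes=$((1024 * 1024 * 1024 * 1024 * 1024))   # 1024 TB of address space
bitmap_bits=$((512 * 1024 * 1024 * 8))             # 512 MB of bitmap, 8 bits per byte
grain_bytes=$((addr_bytes / bitmap_bits))
echo "grain: $((grain_bytes / 1024)) KB per bitmap bit"
```

This is simply arithmetic on the two limits above, not a statement about the internal SVC implementation.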
The Global Mirror commands fall into the following categories:
- Commands to create, delete, and manipulate relationships and consistency groups
- Commands that cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Some configuration commands can only be performed when the clusters are connected, and fail with no effect when they are disconnected. Other configuration commands are permitted even though the clusters are disconnected; the state is reconciled automatically by Global Mirror when the clusters become connected once more.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception, as mentioned previously, is the command that sets clusters into a Global Mirror partnership: the mkpartnership command must be issued on both the local and the remote cluster.

The commands are described here as an abstract command set. They are implemented as:
- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the command svcinfo lscluster, specifying the name of the cluster.
svctask chcluster
The svctask chcluster command has three parameters for Global Mirror:

-gmlinktolerance link_tolerance
Specifies the maximum period of time that the system tolerates delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.

-gminterdelaysimulation link_tolerance
Specifies the number of milliseconds that I/O activity (intercluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

-gmintradelaysimulation link_tolerance
Specifies the number of milliseconds that I/O activity (intracluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use svctask chcluster to adjust these values, for example:

svctask chcluster -gmlinktolerance 300

You can view all of these parameter values with the svcinfo lscluster <clustername> command.
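The constraints quoted above for -gmlinktolerance can be captured in a small helper that checks a proposed value before issuing svctask chcluster. This is a sketch only; the SVC CLI performs its own validation:

```shell
# Accept 0 (feature disabled) or 60..86400 seconds in increments of 10,
# per the documented constraints on -gmlinktolerance.
valid_gmlinktolerance() {
    v=$1
    [ "$v" -eq 0 ] && return 0
    [ "$v" -ge 60 ] && [ "$v" -le 86400 ] && [ $((v % 10)) -eq 0 ]
}

valid_gmlinktolerance 300 && echo "300 is valid"          # the default value
valid_gmlinktolerance 65  || echo "65 is not a multiple of 10"
```

A wrapper script might call this helper before running svctask chcluster -gmlinktolerance on the cluster.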
gmlinktolerance
The gmlinktolerance parameter deserves particular attention. If poor response times extend past the specified tolerance, a 1920 error is logged and one or more Global Mirror relationships are automatically stopped. This protects the application hosts at the primary site. During normal operation, application hosts see minimal impact to response times because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster, which results in extended response times to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the application hosts' response times return to normal.

After a 1920 error has occurred, the Global Mirror auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of the error and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when this error occurs.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the following circumstances:
- During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror VDisks.
- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you are testing with an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships.
Disabling the gmlinktolerance feature prevents this, at the risk of exposing the test host to extended response times. We suggest using a script to periodically monitor the Global Mirror status. Example 6-2 shows an example of such a script in ksh.
Example 6-2 Script example
[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Stop inconsistent variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency group ID variable

# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0

# Start program
if [[ $1 == "" ]]
then
   CICLI="true"
fi
while $CICLI
do
   GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
   echo "`date` Global Mirror STATUS <$GM_STATUS>" >> $FLOG
   if [[ $GM_STATUS = $PARA_TEST ]]
   then
      sleep 600
   else
      sleep 600
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
      then
         ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
         TESTEX=$?
         echo "`date` Global Mirror RESTARTED with RC=$TESTEX" >> $FLOG
      fi
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS = $PARA_TEST ]]
      then
         echo "`date` Global Mirror restarted <$GM_STATUS>"
      else
         echo "`date` ERROR Global Mirror failed <$GM_STATUS>"
      fi
      sleep 600
   fi
   ((VAR+=1))
done

The idea behind the script in Example 6-2 is:
- Check the Global Mirror status every 600 seconds.
- If the status is consistent_synchronized, wait another 600 seconds and test again.
- If the status is consistent_stopped or inconsistent_stopped, wait another 600 seconds and then try to restart Global Mirror. If the status is one of these, we probably have a 1920 error
scenario, and this means we could have a performance problem. Waiting 600 seconds before restarting Global Mirror could give the SVC enough time to deliver the high workload requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the secondary copy is now out of date by this amount of time and needs to be resynchronized.

Note: The script described in Example 6-2 on page 329 is supplied as-is.

A 1920 error indicates that one or more of the SAN components is unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, the result of a maintenance activity) or permanent (for example, the result of a hardware failure or an unexpected host I/O workload). If you are experiencing 1920 errors, we suggest that you install a SAN performance analysis tool, such as IBM Tivoli Storage Productivity Center, make sure that it is correctly configured, and monitor statistics to look for problems and to try to prevent them.
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth should be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
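Because each mkpartnership invocation creates only a one-way partnership, the command is issued once on each cluster. A sketch with hypothetical cluster names, where the helpers simply print what would be run on each side:

```shell
# Print-only stand-ins for CLI sessions on each cluster; the cluster
# names ITSO_SVC_A and ITSO_SVC_B are hypothetical.
on_svc_a() { echo "ITSO_SVC_A> svctask $*"; }
on_svc_b() { echo "ITSO_SVC_B> svctask $*"; }

# -bandwidth caps the background copy rate in MBps (the default is 50).
on_svc_a mkpartnership -bandwidth 50 ITSO_SVC_B
on_svc_b mkpartnership -bandwidth 50 ITSO_SVC_A
```

Only after both commands complete is the partnership fully functional and ready for Global Mirror relationships.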
This value can be determined by calculation, as described above, or alternatively by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to accommodate peaks in workload and an additional safety margin.
svctask chpartnership
To change the bandwidth available for background copy in an SVC cluster partnership, the command svctask chpartnership can be used to specify the new bandwidth.
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Global Mirror consistency group. The Global Mirror consistency group name must be unique across all consistency groups known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process. The new consistency group does not contain any relationships and will be in the Empty state. Global Mirror relationships can be added to the group, either upon creation or afterwards, by using the svctask chrcrelationship command.
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary virtual disk must be equal in size to the master virtual disk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The master and auxiliary VDisk cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, it can be added to an already existing consistency group, or be a stand-alone Global Mirror relationship if no consistency group is specified. To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Global Mirror relationship, use the command svcinfo lsrcrelationshipcandidate, as shown in svcinfo lsrcrelationshipcandidate on page 332.
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list the available VDisks eligible to form a Global Mirror relationship.
When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all VDisks that are not disallowed by some other configuration state, such as being a FlashCopy target, are listed.
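Output from svcinfo lsrcrelationshipcandidate can be parsed in the same way as other colon-delimited SVC listings. A sketch against a captured sample (the VDisk ids and names are made up for illustration):

```shell
# Illustrative 'svcinfo lsrcrelationshipcandidate -delim :' output;
# the VDisk ids and names are invented for this example.
sample='id:vdisk_name
0:GM_DB_Pri
2:GM_Log_Pri'

# Print the candidate VDisk names, skipping the header row.
candidates=$(echo "$sample" | awk -F: 'NR > 1 {print $2}')
echo "$candidates"
```

In a real script, the sample text would instead come from an ssh call to the cluster, as in Example 6-2.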
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Note: When adding a Global Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Global Mirror consistency group.
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Global Mirror relationship. When issuing the command, you can set the copy direction if it is undefined and, optionally, mark the secondary VDisk of the relationship as clean. The command fails if it is used in an attempt to start a relationship that is already part of a consistency group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is not consistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force parameter here is a reminder that the data on the secondary will become inconsistent while resynchronization
(background copying) takes place, and therefore is not usable for disaster recovery purposes before the background copy has completed. In the idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the primary argument, but it must match the existing setting.
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), then the -access parameter can be used with the svctask stoprcrelationship command to enable write access to the secondary virtual disk.
svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Global Mirror consistency group. This command can only be issued to a consistency group that is connected. For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by some I/O error.
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Global Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a consistent state. If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks, which belong to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), then the -access parameter can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two virtual disks. It does not affect the virtual disks themselves. If the relationship is disconnected at the time that the command is issued, then the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a consistency group. You must first remove the relationship from the consistency group. If you delete an inconsistent relationship, the secondary virtual disk becomes accessible even though it is still inconsistent. This is the one case in which Global Mirror does not inhibit access to inconsistent data.
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Global Mirror consistency group. This command deletes the specified consistency group. You can issue this command for any existing consistency group. If the consistency group is disconnected at the time that the command is issued, then the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the consistency group is not empty, then the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the primary and secondary VDisks when a stand-alone relationship is in a consistent state; when issuing the command, the desired primary must be specified.
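A sketch of the command, assuming a relationship named GMREL1 (hypothetical) whose auxiliary VDisk is to become the new primary:

```
IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL1
```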
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the primary and secondary VDisks when a consistency group is in a consistent state. This change is applied to all of the relationships in the consistency group, and when issuing the command, the desired primary must be specified.
Chapter 7. SVC operations using the CLI
IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim ,
id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high
0,DS4500,,IBM ,1742-900,
1,DS4700,,IBM ,1814 , FAStT

This command renames the controller named controller0 to DS4500.

Note: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, it cannot start with a number, a dash, or the word controller, because this prefix is reserved for SVC assignment only.
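Because several SVC objects share these naming rules, it can be convenient to validate a proposed name on the management host before issuing the rename. The following is a minimal sketch in shell; the valid_svc_name helper is hypothetical and not part of the SVC CLI:

```shell
# Hypothetical helper (not part of the SVC CLI): checks a proposed object
# name against the documented rules before you run svctask chcontroller.
valid_svc_name() {
  # 1 to 15 characters from A-Z, a-z, 0-9, dash, underscore;
  # the first character must not be a digit or a dash
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_-]{0,14}$' || return 1
  # must not start with the reserved prefix "controller"
  case "$1" in controller*) return 1 ;; esac
  return 0
}

valid_svc_name DS4500 && echo "DS4500 is a valid name"
```

Names such as 4500DS (starts with a digit) or controller7 (reserved prefix) are rejected by this check, matching the rules in the note above.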
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disks are correctly assigned to the SVC in the disk subsystem and that the zones are properly set up.

Note: If you have assigned a large number of LUNs to your SVC, the discovery process can take a while. Run the svcinfo lsmdisk command several times to check whether all of the MDisks that you expect are present.

When all of the disks allocated to the SVC are visible from the SVC cluster, perform the following steps to identify the MDisks that are unmanaged and ready to be added to a managed disk group (MDG):

1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 7-5. This command displays all detected MDisks that are not currently part of an MDG.
Example 7-5 svcinfo lsmdiskcandidate command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate id 0 1 2 . .
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 7-6.
Example 7-6 svcinfo lsmdisk command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID 0,mdisk0,online,unmanaged,,,36.0GB,0000000000000000,controller0,600a0b8000174431000000eb 47139cca00000000000000000000000000000000 1,mdisk1,online,unmanaged,,,36.0GB,0000000000000001,controller0,600a0b8000174431000000ef 47139e1c00000000000000000000000000000000 2,mdisk2,online,unmanaged,,,36.0GB,0000000000000002,controller0,600a0b8000174431000000f1 47139e7200000000000000000000000000000000 3,mdisk3,online,unmanaged,,,36.0GB,0000000000000003,controller0,600a0b8000174431000000e4 4713575400000000000000000000000000000000 4,mdisk4,online,unmanaged,,,36.0GB,0000000000000004,controller0,600a0b8000174431000000e6 4713576000000000000000000000000000000000 5,mdisk5,online,unmanaged,,,36.0GB,0000000000000000,controller1,600a0b800026b28200003ea3 4851577c00000000000000000000000000000000 6,mdisk6,online,unmanaged,,,36.0GB,0000000000000005,controller0,600a0b8000174431000000e7 47139cb600000000000000000000000000000000 7,mdisk7,online,unmanaged,,,36.0GB,0000000000000001,controller1,600a0b80002904de00004188 485157a400000000000000000000000000000000 8,mdisk8,online,unmanaged,,,36.0GB,0000000000000006,controller0,600a0b8000174431000000ea 47139cc400000000000000000000000000000000
From this output, you can see additional information about each MDisk, such as its current status. For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for an MDG (in our case, all of the MDisks).

Tip: The -delim , parameter collapses the output instead of wrapping the text over multiple lines.

2. If not all of the MDisks that you expected are visible, rescan the available Fibre Channel network by entering the svctask detectmdisk command, as shown in Example 7-7.
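Because -delim produces one comma-separated row per MDisk, the output is also easy to post-process on a management host. A small sketch, using sample data in place of real cluster output (field 4 of each data row is the MDisk mode):

```shell
# Sketch: filtering svcinfo lsmdisk -delim , output for unmanaged MDisks.
# The variable below stands in for real cluster output.
lsmdisk_output='id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity
0,mdisk0,online,unmanaged,,,36.0GB
1,mdisk1,online,managed,0,MDG_DS45,36.0GB
2,mdisk2,online,unmanaged,,,36.0GB'

# Print the id and name of every MDisk whose mode is "unmanaged"
printf '%s\n' "$lsmdisk_output" | awk -F, 'NR > 1 && $4 == "unmanaged" { print $1, $2 }'
```

For the sample data above, this prints mdisk0 and mdisk2, the two MDG candidates.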
Example 7-7 svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 63 for details about how to set up your SAN fabric.
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004 86a6600000ae94a89575900000000000000000000000000000000 1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000 000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004 858a000000e154a895db000000000000000000000000000000000 Example 7-9 shows a summary for a single MDisk.
Example 7-9 Usage of the command svcinfo lsmdisk (id).
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 2 id 2 name mdisk2 status online mode managed mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 16.0GB quorum_index 0 block_size 512 controller_name controller0 ctrl_type 4 ctrl_WWNN 200600A0B84858A0 controller_id 0 path_count 2 max_path_count 2 ctrl_LUN_# 0000000000000002 UID 600a0b80004858a000000e144a895d9400000000000000000000000000000000 preferred_WWPN 200600A0B84858A2 active_WWPN 200600A0B84858A2
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 6,mdisk_6,online,managed,0,MDG_DS45,36.0GB,0000000000000005,DS4500,600a0b800017443 1000000e747139cb600000000000000000000000000000000
This command renamed the MDisk named mdisk6 to mdisk_6. Note: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word mdisk, since this prefix is reserved for SVC assignment only.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431 000000ea47139cc400000000000000000000000000000000 9,mdisk9,excluded,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b2 8200003ed6485157b600000000000000000000000000000000 After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the svctask includemdisk command (Example 7-12) since the SVC cluster does not include the MDisk automatically.
Example 7-12 svctask includemdisk
IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk9 Running the svcinfo lsmdisk command again should show mdisk9 online again, as shown in Example 7-13.
Example 7-13 svcinfo lsmdisk command: Verifying that MDisk is included
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45 You can only add unmanaged MDisks to an MDG. This command adds MDisk mdisk6 to the MDG named MDG_DS45. Important: Do not do this if you want to create an image mode VDisk from the MDisk you are adding. As soon as you add an MDisk to an MDG, it becomes managed, and extent mapping is not necessarily one to one anymore.
IBM_2145:ITSOSVC42A:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG2 -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID 6:mdisk6:online:managed:2:MDG2:3.0GB:0000000000000006:DS4000:600a0b800017423300000 044465c0a2700000000000000000000000000000000 7:mdisk7:online:managed:2:MDG2:6.0GB:0000000000000007:DS4000:600a0b800017443100000 06f465bf93200000000000000000000000000000000 21:mdisk21:online:image:2:MDG2:2.0GB:0000000000000015:DS4000:600a0b800017443100000 0874664018600000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512
MDisk Group, id [0], successfully created

This command creates an MDG called MDG_DS47. The extent size used within this group is 512 MB, which is the most commonly used extent size. We have not yet added any MDisks to the group, so it is an empty MDG.

You can also create the MDG and add unmanaged MDisks in the same command: issue svctask mkmdiskgrp with the -mdisk parameter, supplying the IDs or names of the MDisks, and the MDisks are added immediately after the MDG is created. Before creating the MDG, enter the svcinfo lsmdisk command, as shown in Example 7-18, to list all of the available MDisks that are seen by the SVC cluster.
Example 7-18 Listing available MDisks.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,unmanaged,,,16.0GB,0000000000000000,controller0,600a0b8000486a6600 000ae94a89575900000000000000000000000000000000 1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000 000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004 858a000000e154a895db000000000000000000000000000000000 Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk ids we are using, we can add multiple MDisks to the MDG at the same time. We will now add the unmanaged MDisks shown in Example 7-18 on page 345 to the MDG we created, shown in Example 7-19.
Example 7-19 Creating an MDiskgrp and adding available MDisks.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1 MDisk Group, id [0], successfully created This command creates an MDG called MDG_DS47. The extent size used within this group is 512 MB, and two MDisks (0 and 1) are added to the group.
Note: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpX, where X is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty MDG is created. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number or the word mdiskgrp, because this prefix is reserved for SVC assignment only. By running the svcinfo lsmdisk command, you should now see the MDisks as managed and part of MDG_DS47, as shown in Example 7-20.
Example 7-20 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004 86a6600000ae94a89575900000000000000000000000000000000 1,mdisk1,online,managed,0,MDG_DS47,16.0GB,0000000000000001,controller0,600a0b80004 858a000000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004 858a000000e154a895db000000000000000000000000000000000 You have now completed the tasks required to create an MDG.
IBM_2145:ITSO-CLS1:admin>svctask chmdiskgrp -name MDG_DS81 MDG_DS83 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim , id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0 1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85 This command renamed the MDG from MDG_DS83 to MDG_DS81. Note: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word mdiskgrp, since this prefix is reserved for SVC assignment only.
IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_DS81 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim , id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0 1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0 This command removes MDG_DS81 from the SVC configuration. Note: If there are MDisks within the MDG, you must use the -force flag, for example: svctask rmmdiskgrp MDG_DS81 -force Ensure that you really want to use this flag, as it destroys all mapping information and data held on the VDisks, which cannot be recovered.
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 6 -force MDG_DS45

This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The -force flag is set because VDisks are using this MDG.

Note: The removal only takes place if there is sufficient space to migrate the VDisk data to other extents on other MDisks that remain in the MDG. After you remove the MDisk from the MDG, it takes some time for the MDisk to change its mode from managed to unmanaged.
When creating a host in our SVC cluster, we need to define the connection method. Starting with SVC 5.1, we can define our host as iSCSI-attached or Fibre Channel-attached; these connection methods are described in detail in Chapter 2, IBM System Storage SAN Volume Controller overview on page 7.
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B89C1CD 210000E08B054CAA After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the svctask mkhost command to create your host. Note: If you do not provide the -name parameter, the SVC automatically generates the name hostX (where X is the ID sequence number assigned by the SVC internally). You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word host, since this prefix is reserved for SVC assignment only. The command to create a host is shown in Example 7-26.
Example 7-26 svctask mkhost
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [0], successfully created

This command creates a host called Palau using the WWPNs 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Note: You can define from one up to eight ports per host, or you can use the addport command, which we show in 7.3.5, Adding ports to a defined host on page 352.
In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the host regardless of whether they are connected, as shown in Example 7-27.
Example 7-27 mkhost -force
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Guinea -hbawwpn 210000E08B89C1DC -force Host, id [4], successfully created This command forces the creation of a host called Guinea using WWPN 210000E08B89C1DC. Note: WWPNs are not case sensitive in the CLI. If you run the svcinfo lshost command again, you should now see your host Guinea under host id 4.
We create the host by issuing the mkhost command, as shown in Example 7-28, and when the command completes successfully, we display our newly created host. It is important to know that when the host is initially configured, the default authentication method is set to no authentication, and no CHAP secret is set. To set a CHAP secret for authenticating the iSCSI host with the SAN Volume Controller cluster, use the svctask chhost command with the -chapsecret parameter.
Example 7-28 mkhost command
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

We have now created our host definition, and we will map a VDisk to our new iSCSI server, as shown in Example 7-29. We have already created the VDisk, as shown in 7.4.1, Creating a VDisk on page 354. In our scenario, the VDisk has ID 21 and the host name is Baldur, and we now map the VDisk to our iSCSI host.
Example 7-29 Mapping VDisk to iSCSI host
Virtual Disk to Host map, id [0], successfully created

After the VDisk has been mapped to the host, we display the host information again, as shown in Example 7-30.
Example 7-30 svcinfo lshost
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

Note: Fibre Channel and iSCSI hosts are handled in the same operational way after they have been created. If you need to display the CHAP secret for an already defined host, you can use the svcinfo lsiscsiauth command.
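For example, a hedged sketch of setting and then listing a CHAP secret for this host (the secret value passw0rd is illustrative):

```
IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret passw0rd Baldur
IBM_2145:ITSO-CLS1:admin>svcinfo lsiscsiauth
```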
IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4
This command renamed the host from Guinea to Angola. Note: The chhost command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word host, since this prefix is reserved for SVC assignment only.
Note: If you are using HP-UX, there is a -type option to use. See the IBM System Storage SAN Volume Controller Host Attachment Guide for more information about the hosts that require the -type parameter.
IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola Note: If there are any VDisks assigned to the host, you must use the -force flag, for example: svctask rmhost -force Angola
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B054CAA If the WWPN matches your information (use host or SAN switch utilities to verify), use the svctask addhostport command to add the port to the host. The command to add a host port is shown in Example 7-34.
Example 7-34 svctask addhostport
IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN 210000E08B054CAA to the host Palau.

Note: You can add multiple ports at once by using the colon separator (:) between WWPNs, for example:
svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add the port regardless, as shown in Example 7-35.
Example 7-35 svctask addhostport
This command forces the addition of the WWPN 210000E08B054CAA to the host called Palau.

Note: WWPNs are one of the few things within the CLI that are not case sensitive.

If you run the svcinfo lshost command again, you should see your host with an updated port count of 2, as shown in Example 7-36.
Example 7-36 svcinfo lshost command: port count
If your host currently uses iSCSI as its connection method, you need the new iSCSI IQN ID before you can add the port. Unlike with Fibre Channel-attached hosts, it is not possible to check for available candidate ports. When you have acquired the additional iSCSI IQN, use the svctask addhostport command, as shown in Example 7-37.
Example 7-37 Adding an iSCSI port to an already configured host
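A hedged sketch of the command, mirroring the -iscsiname syntax of rmhostport shown later; the host name Baldur is from this scenario, and the second IQN is illustrative:

```
IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:baldur-port2 Baldur
```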
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111 iogrp_count 4 WWPN 210000E08B054CAA node_logged_in_count 2 state active WWPN 210000E08B89C1CD node_logged_in_count 2 state offline
When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a host port as shown in Example 7-39.
Example 7-39 svctask rmhostport
For removing a WWPN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau

For removing an iSCSI IQN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

These commands remove the WWPN 210000E08B89C1CD from the host Palau and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the host Baldur.

Note: You can remove multiple ports at a time by using the colon separator (:) between the port names, for example:
svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
Example 7-40 svctask mkvdisk commands IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit gb -name Tiger Virtual Disk, id [0], successfully created
To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41 on page 355.
Example 7-41 svcinfo lsvdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 0 id 0 name Tiger IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801AB813F1000000000000000 throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00MB real_capacity 10.00MB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 1 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [7], successfully created

This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named MDG_DS45 and is owned by the I/O group io_grp1. The real capacity automatically expands until the VDisk size of 10 GB is reached. The grain size is set to 32 KB, which is the default.
Note: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto. Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent character (%). Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the VDisk. The auto option creates a VDisk copy that uses the entire size of the MDisk; if you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.
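As a quick arithmetic sketch of the percentage form, using the 1 GB = 1024 MB convention noted above:

```shell
# Arithmetic sketch of the -rsize percentage option: a 10 GB VDisk created
# with -rsize 50% starts with half of its virtual capacity allocated as
# real capacity.
vdisk_size_gb=10
rsize_pct=50
real_capacity_mb=$(( vdisk_size_gb * 1024 * rsize_pct / 100 ))
echo "initial real capacity: ${real_capacity_mb} MB"   # 5120 MB
```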
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk mdisk20 -vtype image -name Image_Vdisk_A Virtual Disk, id [8], successfully created This command creates an image mode VDisk called Image_Vdisk_A using MDisk mdisk20. The VDisk belongs to the MDG MDG_Image and is owned by the I/O group io_grp0.
If we run the svcinfo lsmdisk command again, notice that mdisk20 now has a mode of image, as shown in Example 7-45.
Example 7-45 svcinfo lsmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C id 2 name vdisk_C IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 1 mdisk_grp_name MDG_DS47 capacity 45.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name
RC_id RC_name vdisk_UID 60050768018301BF2800000000000002 virtual_disk_throttling (MB) 20 preferred_node_id 3 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 12.00GB free_capacity 12.00GB overallocation 375 autoexpand off warning 23 grainsize 32 In Example 7-47, the VDisk copy mirror is being added using the svctask addvdiskcopy command.
Example 7-47 svctask addvdiskcopy
IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -vtype striped -rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C Vdisk [2] copy [1] successfully created During the synchronization process, the status can be seen using the command svcinfo lsvdisksyncprogress. As shown in Example 7-48, the first time the status has been checked the synchronization progress was at 86%, and the estimated completion time was 19:16:54. The second time the command is performed, the progress status is at 100%, and the synchronization has completed.
Example 7-48 Synchronization
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C vdisk_id vdisk_name copy_id progress estimated_completion_time 2 vdisk_C 1 86 080710191654 IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk_C 1 100
As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and can be seen using the svcinfo lsvdisk command.
Example 7-49 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 45.0GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.44MB
real_capacity 20.02GB
free_capacity 20.02GB
overallocation 224
autoexpand on
warning 80
grainsize 64

Notice that the VDisk copy mirror (copy_id 1) does not have the same values as the original VDisk copy. When adding a VDisk copy mirror, you can define it with parameters that differ from those of the original copy. This means that you can define a space-efficient VDisk copy mirror for a non-space-efficient VDisk copy, and vice versa, which is one way to migrate a non-space-efficient VDisk to a space-efficient VDisk.

Note: To change the parameters of a VDisk copy mirror, it must be deleted and redefined with the new values.
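Building on this behavior, a hedged outline of migrating a non-space-efficient VDisk to a space-efficient one, using the names from this scenario (the svctask rmvdiskcopy step is assumed to be available for removing the original copy after the mirror is synchronized):

```
IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskcopy -copy 0 vdisk_C
```

Repeat the lsvdisksyncprogress command until the progress reaches 100 before removing the original copy.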
IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name vdisk_N vdisk_B Virtual Disk, id [2], successfully created As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an independent VDisk.
Example 7-51 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_N id 2 name vdisk_N IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 100.0GB
type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF280000000000002F throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 84.75MB real_capacity 20.10GB free_capacity 20.01GB overallocation 497 autoexpand on warning 80 grainsize 64 The VDisk copy of VDisk vdisk_B has now lost its mirror. Therefore, a new VDisk has been created.
Note: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O group that does not include any of those hosts. This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk in the target IO group. If the -force parameter is used and the cluster is unable to destage all write data from the cache, the contents of the VDisk are corrupted by the loss of the cached data. If the -force parameter is used to move a VDisk that has out-of-sync copies, a full re-synchronization is required.
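A hedged sketch of the move itself, assuming the -iogrp parameter of svctask chvdisk and an illustrative VDisk named vdisk_A:

```
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -iogrp io_grp1 vdisk_A
```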
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk7
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk7

The first command changes the VDisk throttling of vdisk7 to 20 MBps, while the second command changes the space-efficient VDisk warning threshold to 85%. If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in Example 7-53.

Note: When renaming a VDisk, the chvdisk command specifies the new name first. The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 15 characters in length. However, it cannot start with a number, a dash, or the word vdisk, because this prefix is reserved for SVC assignment only.
Example 7-53 svcinfo lsvdisk command: verifying throttling
id 7 name vdisk7 IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 10.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF280000000000000A virtual_disk_throttling (MB) 20 preferred_node_id 6 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 5.02GB free_capacity 5.02GB overallocation 199 autoexpand on warning 85 grainsize 32
If the VDisk is currently the subject of a migration to image mode, the delete fails unless the -force flag is specified, in which case the migration is halted and the VDisk is deleted. If the command succeeds (without the -force flag) for an image mode disk, the underlying back-end controller logical unit is consistent with the data that a host could previously have read from the image mode VDisk; that is, all fast write data has been flushed to the underlying LUN. If the -force flag is used, this guarantee does not hold. If there is any un-destaged data in the fast write cache for this VDisk, the deletion of the VDisk fails unless the -force flag is specified, in which case any un-destaged data in the fast write cache is discarded. Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown in Example 7-54.
Example 7-54 svctask rmvdisk
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk vdisk_A

This command deletes VDisk vdisk_A from the SVC configuration. If the VDisk is assigned to a host, you must use the -force flag to delete it (Example 7-55).
Example 7-55 svctask rmvdisk (force)
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk -force vdisk_A

To expand a VDisk, use the svctask expandvdisksize command, as shown in Example 7-56.

Example 7-56 svctask expandvdisksize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C

This command expands vdisk_C, which was 35 GB, by another 5 GB to a total of 40 GB. To expand the real capacity of a space-efficient VDisk, use the -rsize option, as shown in Example 7-57. That command increases the real capacity of vdisk_B by 5 GB to 55 GB; the virtual capacity of the VDisk remains unchanged.
Example 7-57 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
Chapter 7. SVC operations using the CLI
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 55.00GB
free_capacity 55.00GB
overallocation 181
autoexpand off
warning 40
grainsize 32

Important: If a VDisk is expanded, its type becomes striped, even if it was previously sequential or in image mode. If there are not enough extents to expand your VDisk to the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
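The overallocation value in these listings can be reproduced by hand: it is the virtual capacity expressed as a percentage of the real capacity, so 100 GB over 55 GB gives 181. A sketch of the arithmetic (illustrative shell, assuming integer truncation, which matches the 181 and 199 values shown in this chapter):

```shell
# Overallocation = virtual capacity / real capacity, as a percentage.
# Values from the listing above: capacity 100.0GB, real_capacity 55.00GB.
capacity_mb=$((100 * 1024))
real_capacity_mb=$((55 * 1024))
overallocation=$((capacity_mb * 100 / real_capacity_mb))
echo "overallocation: ${overallocation}"   # 181, as reported by lsvdisk
```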
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_B
Virtual Disk to Host map, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_C
Virtual Disk to Host map, id [1], successfully created

These commands assign vdisk_B and vdisk_C to the host Tiger, as shown in Example 7-59.
Example 7-59 svcinfo lshostvdiskmap -delim,
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim ,
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Tiger,2,1,vdisk_B,210000E08B892BCD,60050768018301BF2800000000000001
1,Tiger,1,2,vdisk_C,210000E08B892BCD,60050768018301BF2800000000000002

Note: The optional -scsi scsi_num parameter can assign a specific SCSI LUN ID to a VDisk that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host. Note that some HBA device drivers stop scanning when they find a gap in the SCSI LUN IDs. For example:
- Virtual Disk 1 is mapped to Host 1 with SCSI LUN ID 1.
- Virtual Disk 2 is mapped to Host 1 with SCSI LUN ID 2.
- Virtual Disk 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering Virtual Disks 1 and 2, because no SCSI LUN is mapped with ID 3. Therefore, take care to ensure that the SCSI LUN ID allocation is contiguous. It is not possible to map a VDisk to a host more than once at different LUN numbers (Example 7-60).
Example 7-60 svctask mkvdiskhostmap
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam vdisk_A
Virtual Disk to Host map, id [0], successfully created

This command maps the VDisk vdisk_A to the host Siam. You have now completed all of the tasks required to assign a VDisk to an attached host.
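Given the note above about contiguous SCSI LUN IDs, a small script can check a host's mappings for gaps. This is a sketch that parses saved svcinfo lshostvdiskmap -delim , output (captured here in a here-document using sample mappings for the host Siam that appear later in this chapter, rather than taken from a live cluster):

```shell
# Check that the SCSI LUN IDs assigned to a host are contiguous from 0.
# Sample output as produced by: svcinfo lshostvdiskmap -delim , <hostname>
cat > /tmp/hostmap.csv <<'EOF'
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
EOF

# Extract the SCSI_id column, sort numerically, and compare each ID with
# its expected position; any mismatch indicates a gap in the allocation.
gaps=$(awk -F, 'NR>1 {print $3}' /tmp/hostmap.csv | sort -n |
       awk '$1 != NR-1 {print "gap before SCSI_id " $1}')
if [ -z "$gaps" ]; then
  echo "SCSI LUN IDs are contiguous"
else
  echo "$gaps"
fi
```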
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C

This command shows that the host Siam has only one VDisk, called vdisk_A, assigned to it. The SCSI LUN ID is also shown; this is the ID by which the VDisk is presented to the host. If no host is specified, all defined host-to-VDisk mappings are returned.
Note: Although the -delim , flag normally comes at the end of the command string, in this case you must specify it before the host name. Otherwise, the command returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Tiger vdisk_D

This command unmaps the VDisk called vdisk_D from the host Tiger.
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -vdisk vdisk_C

This command moves vdisk_C to the MDisk group MDG_DS47.

Note: If insufficient extents are available within your target MDG, you receive an error message. Make sure that the source and target MDisk groups have the same extent size.

The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. If you want the process to take a lower priority than other types of I/O, you can specify 3, 2, or 1.
You can run the svcinfo lsmigrate command at any time to see the status of the migration process. This is shown in Example 7-64.
Example 7-64 svcinfo lsmigrate command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

Note: The progress is given as percent complete. If you receive no further output, the migration has finished.
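When scripting, the progress value can be pulled out of the lsmigrate output with awk. A sketch using the sample output above (captured in a here-document; on a live cluster you would pipe svcinfo lsmigrate instead):

```shell
# Extract the progress percentage from saved svcinfo lsmigrate output.
cat > /tmp/lsmigrate.out <<'EOF'
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
EOF

progress=$(awk '$1 == "progress" {print $2}' /tmp/lsmigrate.out)
echo "migration is ${progress}% complete"
```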
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk vdisk_A -mdisk mdisk8 -mdiskgrp MDG_Image

In this example, the data from vdisk_A is migrated onto mdisk8, and mdisk8 is placed into the MDisk group MDG_Image.
Do not shrink a VDisk below its used size. All capacities, including changes, must be in multiples of 512 bytes, and an entire extent is reserved even if it is only partially used. The default capacity units are MB.

The command can be used to shrink the physical capacity that is allocated to a particular VDisk by the specified amount, or to shrink the virtual capacity of a space-efficient VDisk without altering the physical capacity assigned to it:
- For a non-space-efficient VDisk, use the -size parameter.
- For the real capacity of a space-efficient VDisk, use the -rsize parameter.
- For the virtual capacity of a space-efficient VDisk, use the -size parameter.

When the virtual capacity of a space-efficient VDisk is changed, the warning threshold is automatically scaled to match, because the threshold is stored as a percentage.

The cluster arbitrarily reduces the capacity of the VDisk by removing one or more extents, or part of an extent, from those allocated to the VDisk. You cannot control which extents are removed, so you cannot assume that it is unused space that is removed.

Note: Image mode VDisks cannot be reduced in size; they must first be migrated to managed mode. To run the shrinkvdisksize command on a mirrored VDisk, all copies of the VDisk must be synchronized.
Attention:
1. If the VDisk contains data, do not shrink the disk.
2. Some operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons.
3. This command can shrink FlashCopy target VDisks to the same capacity as the source. Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. You can determine the exact capacity of the source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command.

Shrink the VDisk by the required amount by issuing the following command:
svctask shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id

Assuming that your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given VDisk, as shown in Example 7-66.
Example 7-66 svctask shrinkvdisksize
IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A

This command shrinks the VDisk vdisk_A, previously 80 GB in total, by 44 GB to a new total size of 36 GB.
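The arithmetic behind the example, and the rule that all capacities must be multiples of 512 bytes, can be checked in a few lines of shell (an illustrative sketch using the sizes from the example above, not an SVC command):

```shell
# Sizes from the example: an 80 GB VDisk shrunk by 44 GB.
old_gb=80
shrink_gb=44
new_gb=$((old_gb - shrink_gb))
echo "new capacity: ${new_gb} GB"   # 36 GB

# All capacities, including the change, must be multiples of 512 bytes.
shrink_bytes=$((shrink_gb * 1024 * 1024 * 1024))
if [ $((shrink_bytes % 512)) -eq 0 ]; then
  echo "shrink amount is a valid multiple of 512 bytes"
fi
```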
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0  0
2  0
3  0
4  0
5  0

This command displays a list of all VDisk IDs that correspond to the VDisk copies that use mdisk1. To correlate the IDs in this output with VDisk names, run the svcinfo lsvdisk command, which is discussed in more detail in 7.4, Working with VDisks on page 354.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG_DS47 -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
5,mdisk5,online,managed,1,MDG_DS47,36.0GB,0000000000000000,DS4700,600a0b800026b28200003ea34851577c00000000000000000000000000000000
7,mdisk7,online,managed,1,MDG_DS47,36.0GB,0000000000000001,DS4700,600a0b80002904de00004188485157a400000000000000000000000000000000
9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000
12,mdisk12,online,managed,1,MDG_DS47,36.0GB,0000000000000003,DS4700,600a0b80002904de000041ba485157d000000000000000000000000000000000
14,mdisk14,online,managed,1,MDG_DS47,36.0GB,0000000000000004,DS4700,600a0b800026b28200003f6c4851585200000000000000000000000000000000
18,mdisk18,online,managed,1,MDG_DS47,36.0GB,0000000000000005,DS4700,600a0b80002904de000042504851586800000000000000000000000000000000
19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b28200003f9f4851588700000000000000000000000000000000
20,mdisk20,online,managed,1,MDG_DS47,36.0GB,0000000000000007,DS4700,600a0b80002904de00004282485158aa00000000000000000000000000000000
7.4.19 Showing from which MDisks a VDisk gets its extents
Use the svcinfo lsvdiskmember command, as shown in Example 7-69, to show which MDisks are used by a specific VDisk.
Example 7-69 svcinfo lsvdiskmember command
1
2
3
4
6
10
11
13
15
16
17

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 7.2, Working with managed disks and disk controller systems on page 338 (using the IDs displayed above rather than names).
7.4.20 Showing from which MDisk group a VDisk gets its extents
Use the svcinfo lsvdisk command, as shown in Example 7-70, to show which MDG a specific VDisk belongs to.
Example 7-70 svcinfo lsvdisk command: MDG name
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command, as explained in 7.2.11, Working with managed disk groups (MDG) on page 344.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap -delim , vdisk_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001

This command shows the host or hosts to which vdisk_B is mapped. It is normal to see duplicate entries, because there are multiple paths between the cluster and the host. To ensure that the host operating system sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Note: Although the optional -delim , flag normally comes at the end of the command string, in this case you must specify it before the VDisk name. Otherwise, the command does not return any data.
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

This command shows which VDisks are mapped to the host called Siam.

Note: Although the optional -delim , flag normally comes at the end of the command string, in this case you must specify it before the host name. Otherwise, the command does not return any data.
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
0        Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL  20      0
1        Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL  2343    0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
0        Scsi Port2 Bus0/Disk2 Part0  OPEN   NORMAL  2335    0
1        Scsi Port3 Bus0/Disk2 Part0  OPEN   NORMAL  0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
0        Scsi Port2 Bus0/Disk3 Part0  OPEN   NORMAL  2331    0
1        Scsi Port3 Bus0/Disk3 Part0  OPEN   NORMAL  0       0

Note: In Example 7-73, the state of each path is OPEN. Sometimes you will find the state CLOSED; this does not necessarily indicate a problem, because it might be due to the stage of processing that the path is in.

2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks (Example 7-74).
Example 7-74 svcinfo lshostvdiskmap
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output; this host was defined in our SVC as Siam.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisks that make up the specified VDisk (Example 7-75).
Example 7-75 svcinfo lsvdiskmember
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri
id
0
1
2
3
4
10
11
13
15
16
17
4. Query each MDisk with the svcinfo lsmdisk mdiskID command to find its controller and LUN number information, as shown in Example 7-76. The output displays the controller name and the controller LUN ID, which (provided that you gave your controller a unique name, such as its serial number) is enough to trace back to a LUN within the disk subsystem.
Example 7-76 svcinfo lsmdisk command
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433
active_WWPN 200400A0B8174433
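The first correlation step above, matching the datapath query device SERIAL to a vdisk_UID, can also be scripted, because the SERIAL reported by SDD is exactly the SVC vdisk_UID. A sketch that looks up a serial number in saved lshostvdiskmap -delim , output (sample data from Example 7-74):

```shell
# Map a datapath "SERIAL" value back to its VDisk name using saved
# svcinfo lshostvdiskmap -delim , output.
cat > /tmp/siam_map.csv <<'EOF'
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
EOF

serial=60050768018301BF2800000000000005   # SERIAL of DEV# 0 above
vdisk=$(awk -F, -v uid="$serial" '$7 == uid {print $5}' /tmp/siam_map.csv)
echo "serial $serial is $vdisk"
```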
In large SAN environments where scripting with svctask commands is used, we recommend keeping the scripts as simple as possible, because fallback, documentation, and verification of a script before execution become harder to accomplish as complexity grows.
Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.
Example 7-77 svcinfo lsvdisk -filtervalue? command
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
When you know the filters, you can be more selective in generating output:
- Multiple filters can be combined to create specific searches.
- You can use an * as a wildcard when using names.
- When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For example, if we issue the svcinfo lsvdisk command with no filters, we see the output shown in Example 7-78 on page 378.
Example 7-78 svcinfo lsvdisk command: no filters
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0,1

Tip: The -delim : parameter prevents the output from wrapping over multiple lines by separating the data fields with colons; this is normally used when you need to capture reports during script execution.

If we now add a filter to our svcinfo command (such as FC_name), we can reduce the output, as shown in Example 7-79.
Example 7-79 svcinfo lsvdisk command: with filter
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=*7 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
A filter such as -filtervalue IO_group_id=0 shows all Virtual Disks (VDisks) with IO_group_id 0, while the command above shows all VDisks whose mdisk_grp_name ends with a 7. The wildcard * can be used when names are used.
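The same wildcard selection can be reproduced offline against saved lsvdisk output, which is handy when auditing a configuration dump. A sketch that keeps only rows whose mdisk_grp_name ends in 7 (sample rows from Example 7-78), mirroring -filtervalue mdisk_grp_name=*7:

```shell
cat > /tmp/lsvdisk.csv <<'EOF'
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
EOF

# Keep only rows whose mdisk_grp_name (field 7) ends in "7" and
# print the VDisk name (field 2).
matches=$(awk -F, 'NR>1 && $7 ~ /7$/ {print $2}' /tmp/lsvdisk.csv)
echo "$matches"
```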
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 20        0000020063E03A38
0000020061006FCA ITSO-CLS2 remote   fully_configured 50        0000020061006FCA
Attention: Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from active hosts and force these hosts to flush any cached data by un-mounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Some hosts might need to be rebooted to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2 model nodes in a cluster. The 8F4 nodes automatically negotiate the fabric speed on a per-port basis.
IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>

More information about managing users is in Managing users using the CLI on page 393.
iSCSI authentication using CHAP can be configured in two ways: for the whole cluster, or per host connection. Configuring a CHAP secret for the whole cluster is shown in Example 7-82 on page 381.
Example 7-82 setting a CHAP secret for the entire cluster to passw0rd
IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd
IBM_2145:ITSO-CLS1:admin>

In our scenario, the cluster IP address is 9.64.210.64, and it is not affected while we configure the node IP addresses. We start by listing the ports with the svcinfo lsportip command. There are two Ethernet ports per node to work with, and each port can have two IP addresses that can be used for iSCSI. In this example, we configure the secondary port on both nodes in our I/O group, as shown in Example 7-83.
Example 7-83 Configuring secondary ethernet port on SVC nodes
If we want this iSCSI port to fail over to the other node in the I/O group in case of a failure, we need to configure this by using the svctask chnode command with the -failover parameter, identifying the node that we are assigning as the iSCSI failover node by its name or iSCSI alias. To display the iSCSI IQN, issue the svcinfo lsnode command, as shown in Example 7-84.
Example 7-84 svcinfo lsnode command.
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode 1
id 1
name node1
UPS_serial_number 100068A006
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node no
UPS_unique_id 2040000188440006
port_id 50050768014027E2
port_status active
port_speed 4Gb
port_id 50050768013027E2
port_status active
port_speed 4Gb
port_id 50050768011027E2
port_status active
port_speed 4Gb
port_id 50050768012027E2
port_status active
port_speed 4Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node1
iscsi_alias
failover_active no
failover_name
failover_iscsi_name
failover_iscsi_alias

Each node has a unique IQN that applies to both ports of that node. To activate the failover, issue the svctask chnode -failover command with the node name or the iSCSI alias of the node that you want to assign as the failover node, as shown in Example 7-85.
Example 7-85 Setting failover node
IBM_2145:ITSO-CLS1:admin>svctask chnode -failover -name node2 1

When we run svcinfo lsnode again, we can see that node2 is now the failover node for node1, as shown in Example 7-86.
Example 7-86 Failover node set
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode 1
id 1
name node1
UPS_serial_number 100068A006
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node no
UPS_unique_id 2040000188440006
port_id 50050768014027E2
port_status active
port_speed 4Gb
port_id 50050768013027E2
port_status active
port_speed 4Gb
port_id 50050768011027E2
port_status active
port_speed 4Gb
port_id 50050768012027E2
port_status active
port_speed 4Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node1
iscsi_alias RedNodeofITSOcluster
failover_active no
failover_name node2
failover_iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node2
failover_iscsi_alias NodeBelowRedNode
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the cluster to 10.20.133.5.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch PuTTY and point it to the new IP address; your SSH key will still work.
We have now completed the tasks required to change the IP addresses (cluster and service) of the SVC environment.
Note: If you have changed the time zone, you must clear the error log dump directory before you can view the error log through the Web application.
2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 7-89. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), you can go to Step 4. If not, continue with Step 3.
Example 7-89 svcinfo lstimezones
IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.
3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone command (Example 7-90).
Example 7-90 svctask settimezone
IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520
4. Set the cluster time by issuing the svctask setclustertime command (Example 7-91).
Example 7-91 svctask setclustertime
IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008
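The -time value in Example 7-91 packs month, day, hour, minute, and year together: 061718402008 reads as June 17, 18:40, 2008, that is, an MMDDHHmmYYYY layout. (The layout is inferred from the example value; verify it against the product documentation before relying on it.) A timestamp in that layout can be produced with date, shown here for the fixed instant used in the example:

```shell
# Format a timestamp in the assumed MMDDHHmmYYYY layout used by
# svctask setclustertime; here the fixed instant 2008-06-17 18:40 UTC.
ts=$(date -u -d '2008-06-17 18:40:00 UTC' +%m%d%H%M%Y 2>/dev/null ||
     date -u -r 1213728000 +%m%d%H%M%Y)    # fallback for BSD date
echo "$ts"   # 061718402008
```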
You have now completed the tasks necessary to set the cluster time zone and time.
IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection, gathering data at 15-minute intervals.

Note: To verify that statistics collection is set, display the cluster properties again, as shown in Example 7-93.
Example 7-93 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15
(Note: the output has been shortened for easier reading.)

We have now completed the tasks required to start statistics collection on the cluster.
IBM_2145:ITSO-CLS1:admin>svctask stopstats

This command stops the statistics collection; it produces no confirmation message. To verify that statistics collection is stopped, display the cluster properties again, as shown in Example 7-95.
Example 7-95 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status off
statistics_frequency 15
(Note: the output has been shortened for easier reading.)

Notice that the interval parameter is not changed, but the status is now off. We have now completed the tasks required to stop statistics collection on our cluster.
This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with the cluster, and the PuTTY application closes automatically.

2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y executes the command; entering anything other than y(es) or Y(ES) results in the command not executing. In either case, no feedback is displayed.

Important: Before shutting down a cluster, ensure that all I/O operations destined for this cluster are stopped, because you will lose access to all VDisks provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems. (There is no need to do this when you shut down a single node.) Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that use the VDisks provided by the cluster.
3. We have now completed the tasks required to shut down the cluster. To shut down the uninterruptible power supplies, just press the power button on their front panels. Note: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then you go to the service panel of one of the nodes within the cluster and press the power on button. After it is fully booted up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all nodes are fully booted, you can re-establish administrative contact using PuTTY, and your cluster is fully operational again.
7.8 Nodes
This section details the tasks that can be performed at an individual node level.
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4
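When scripting, fields from the CSV form of lsnode can be selected with standard tools; for example, finding which node is the configuration node. A sketch against saved output (using the listing above, captured in a here-document rather than from a live cluster):

```shell
cat > /tmp/lsnode.csv <<'EOF'
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4
EOF

# config_node is field 8; print the name (field 2) of the node holding it.
config_node=$(awk -F, 'NR>1 && $8 == "yes" {print $2}' /tmp/lsnode.csv)
echo "configuration node: $config_node"
```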
UPS_unique_id    hardware
20400001864C1008 8G4
20400001C3240004 8G4
Note: The node that you want to add must be attached to a different UPS (that is, a different UPS serial number) than the first node.
Example 7-100 svcinfo lsnode
Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration. The command to add a node to the SVC cluster is shown in Example 7-101.
Example 7-101 svctask addnode (wwnodename)
IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0
Node, id [2], successfully added

This command adds the candidate node with WWNN 50050768010027E2 to the I/O group io_grp0. We used the -wwnodename parameter (50050768010027E2), but we could have used the -panelname parameter (108283) instead (Example 7-102); when you are standing in front of the node, the panel name is easier to read than the WWNN.
Example 7-102 svctask addnode (panelname)
IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0

We also used the optional -name parameter (Node2). If you do not provide -name, the SVC automatically generates the name nodeX (where X is the ID sequence number assigned internally by the SVC).

Note: If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word node, because this prefix is reserved for SVC assignment only.

If the svctask addnode command returns no information, and your second node is powered on with the zones correctly defined, pre-existing cluster configuration data might be stored on the node. If you are sure that the node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. After that is complete, re-issue the svcinfo lsnodecandidate command, and the node should be listed.
IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4

This command renames node ID 4 to ITSO_CLS1_Node1.

Note: The chnode command specifies the new name first. You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word node, because this prefix is reserved for SVC assignment only.
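The naming rules above (letters, digits, dash, and underscore; 1 to 15 characters; no leading digit or dash; no reserved node prefix) can be expressed as a single pattern check before a rename is attempted. A sketch (the valid_name helper is hypothetical, and it checks the reserved prefix case-sensitively as an assumption; the SVC performs its own validation regardless):

```shell
# Validate a candidate node name against the documented rules:
# characters A-Za-z0-9_- , length 1-15, no leading digit or dash,
# and no reserved "node" prefix.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_-]{0,14}$' &&
  case "$1" in node*) return 1 ;; *) return 0 ;; esac
}

valid_name "ITSO_CLS1_Node1" && echo "ITSO_CLS1_Node1: valid"
valid_name "node5"           || echo "node5: rejected (reserved prefix)"
valid_name "4thnode"         || echo "4thnode: rejected (leading digit)"
```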
IBM_2145:ITSO-CLS1:admin>svctask rmnode node4

This command removes node4 from the SVC cluster. Because node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O group. The PuTTY session cannot be passed dynamically to the surviving node, so the PuTTY application loses communication and closes automatically; restart PuTTY to establish a secure session with the new configuration node.

Important: If this is the last node in an I/O group and VDisks are still assigned to the I/O group, the node is not deleted from the cluster. If this is the last node in the cluster and the I/O group has no Virtual Disks remaining, the cluster is destroyed and all virtualization information is lost; back up or migrate any data that is still required before destroying the cluster.
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this is done, the other node in the I/O group destages the contents of its cache and goes into write-through mode until the stopped node is powered up and rejoins the cluster.

Note: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations; the cluster handles them. Be aware, however, that the surviving node in the I/O group is now a single point of failure. If this is the last node in an I/O group, all access to the VDisks in the I/O group is lost; ensure that this is what you want to do, because in that case you must also specify the -force flag.

By reissuing the svcinfo lsnode command (Example 7-106), we can see that the node is now offline.
Example 7-106 svcinfo lsnode
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

Note: To restart the node, push the power-on button on the service panel of the node.

We have now completed the tasks required to view, add, delete, rename, and shut down a node within an SVC environment.
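The comma-delimited form of svcinfo lsnode lends itself to scripting. The following Python sketch parses the sample output above (copied verbatim from the listing) and picks out the offline nodes and the configuration node; in practice you would capture the command output over SSH, which is not shown here.

```python
import csv
import io

# Sample output captured from: svcinfo lsnode -delim ,
LSNODE_OUTPUT = """\
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown
"""

def parse_lsnode(output):
    """Parse 'svcinfo lsnode -delim ,' output into a list of dicts."""
    return list(csv.DictReader(io.StringIO(output)))

def offline_nodes(nodes):
    """Return the names of nodes whose status is not online."""
    return [n["name"] for n in nodes if n["status"] != "online"]

def config_node(nodes):
    """Return the name of the current configuration node, if any."""
    for n in nodes:
        if n["config_node"] == "yes":
            return n["name"]
    return None

nodes = parse_lsnode(LSNODE_OUTPUT)
print(offline_nodes(nodes))  # ['n4']
print(config_node(nodes))    # n1
```

The same pattern works for any svcinfo listing command that accepts -delim.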
vdisk_count host_count
3           3
4           3
0           2
0           2
0           0
As we can see, the SVC predefines five I/O groups. In a four node cluster (like ours), only two I/O groups are actually in use. The other I/O groups (io_grp2 and io_grp3) are for a six or eight node cluster. The recovery I/O group is a temporary home for VDisks when all nodes in the I/O group that normally owns them have suffered multiple failures. This allows us to move the VDisks to the recovery I/O group and then into a working I/O group. Of course, while temporarily assigned to the recovery I/O group, I/O access is not possible.
IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1 This command renames the I/O group io_grp1 to io_grpA.
Chapter 7. SVC operations using the CLI
Note: The chiogrp command specifies the new name first. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word iogrp, since this prefix is reserved for SVC assignment only. To see whether the renaming was successful, issue the svcinfo lsiogrp command again and you should see the change reflected. We have now completed the tasks required to rename an I/O group.
IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

Parameters:
-iogrp iogrp_list
  Specifies a list of one or more I/O groups to map to the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall
  Specifies that all of the I/O groups are mapped to the specified host. This parameter is mutually exclusive with -iogrp.
-host host_id_or_name
  Identifies the host, by ID or name, to which the I/O groups are mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O group, as shown in Example 7-110.
Example 7-110 svctask rmhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

Parameters:
-iogrp iogrp_list
  Specifies a list of one or more I/O groups to unmap from the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall
  Specifies that all of the I/O groups are unmapped from the specified host. This parameter is mutually exclusive with -iogrp.
-force
  If the removal of a host-to-I/O-group mapping results in the loss of VDisk-to-host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the deletion of the host-to-I/O-group mapping.
host_id_or_name
  Identifies the host, by ID or name, from which the I/O groups are unmapped.
IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1  io_grp1

To list all the host objects that are mapped to a specified I/O group, use the svcinfo lsiogrphost command, as shown in Example 7-112.
Example 7-112 svcinfo lsiogrphost
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam

Where io_grp1 is the I/O group name.
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
An example of the simple creation of a user is shown in Example 7-114. User John is added to the user group Monitor with the password m0nitor.
IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor
User, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>

Local users are users that are not authenticated by a remote authentication server. Remote users are users that are authenticated by a remote central registry server. The user groups already have a defined authority role, as shown in Table 7-2.
Table 7-2 Authority roles

Security Admin (Superusers): All commands.

Administrator (Administrators that control the SVC): All commands except the following svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.

Copy Operator: All svcinfo commands, and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.

Service (for those who perform service maintenance and other hardware tasks on the cluster): All svcinfo commands, and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.

Monitor: All svcinfo commands; the svctask commands finderr, dumperrlog, dumpinternallog, and chcurrentuser; and the svcconfig backup command.
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
To view the currently defined users and the user groups to which they belong, we use the svcinfo lsuser command, as shown in Example 7-116 on page 395.
Example 7-116 svcinfo lsuser
This means that actions performed using both the native GUI and the SVC Console are recorded in the audit log. The following commands are not audited:

svctask cpdumps
svctask cleardumps
svctask finderr
svctask dumperrlog
svctask dumpinternallog
The audit log holds approximately 1 MB of data, which can contain about 6,000 average-length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log. To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 7-117.
Example 7-117 catauditlog command
IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim , 291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21 292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21 293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1 294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1 295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21
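Each delimited audit log entry carries a sequence number, a YYMMDDhhmmss timestamp, the user, the originating IP address, a result code, a field that is empty in this output, and the command text. A small Python sketch that parses the entries from the listing above (the field names are our own labels, not official ones):

```python
from datetime import datetime

# Entries copied from the 'svcinfo catauditlog -first 5 -delim ,' listing above.
AUDIT_LINES = [
    "291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21",
    "292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21",
    "293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1",
    "294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1",
    "295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21",
]

def parse_entry(line):
    """Split one audit-log line into its fields.

    The split is limited to six commas so that any commas inside the
    logged command text are preserved.
    """
    seq, stamp, user, ip, result, extra, command = line.split(",", 6)
    return {
        "seq": int(seq),
        # e.g. 090904200329 -> 2009-09-04 20:03:29
        "time": datetime.strptime(stamp, "%y%m%d%H%M%S"),
        "user": user,
        "ip": ip,
        "result": int(result),
        "command": command,
    }

entries = [parse_entry(line) for line in AUDIT_LINES]
print(entries[0]["user"], entries[0]["time"], entries[0]["command"])
```

Parsing the timestamp into a real datetime makes it easy to answer questions such as "what did user admin change last night?".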
If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the command svctask dumpauditlog. This command does not provide any feedback, just the prompt. To obtain a list of the audit log dumps, use svcinfo lsauditlogdumps, as described in Example 7-118.
Example 7-118 svctask dumpauditlog / svcinfo lsauditlogdumps command
Scenario description
We use the following scenario in both the command line section and the GUI section. In this scenario, we want to FlashCopy the following VDisks:

DB_Source: database files
Log_Source: database log files
App_Source: application files
Because data integrity must be maintained across DB_Source and Log_Source, we create consistency groups to handle the FlashCopy of DB_Source and Log_Source. In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We make two FlashCopy targets each for DB_Source and Log_Source, and therefore two consistency groups. The scenario is shown in Example 7-126 on page 402.
Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
App_Source FlashCopy to App_Target1; the mapping name is App_Map1
Copy rate: 50
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1 FlashCopy Consistency Group, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2 FlashCopy Consistency Group, id [2], successfully created In Example 7-120, we checked the status of consistency groups. Each has a status of empty.
Example 7-120 Checking the status
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp id name status 1 FCCG1 empty 2 FCCG2 empty If you would like to change the name of a consistency group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
If no consistency group is defined, the mapping is assigned to the default group 0. This is a special group that cannot be started as a whole; mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that is given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default is 50.

Tip: There is a parameter to delete FlashCopy mappings automatically after completion of the background copy (when the mapping reaches the idle_or_copied state). Use the command:

svctask mkfcmap -autodelete

This parameter does not delete a mapping that is in a cascade with dependent mappings, because such a mapping cannot reach the idle_or_copied state.

In Example 7-121, the first FlashCopy mapping for DB_Source and Log_Source is created.
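The background copy rate value is not a percentage; it maps to a data rate. The sketch below follows the commonly documented SVC mapping (rates 1 to 10 correspond to 128 KBps, with the rate doubling for each band of ten up to 64 MBps for rates 91 to 100); treat it as illustrative, and confirm the table against the product documentation for your code level.

```python
def background_copy_rate_kbps(copy_rate):
    """Map an SVC background copy rate value (0-100) to KBps.

    Rate 0 means no background copy. Each band of ten doubles the rate:
    1-10 -> 128 KBps, 11-20 -> 256 KBps, ..., 91-100 -> 65536 KBps (64 MBps).
    """
    if copy_rate == 0:
        return 0
    if not 1 <= copy_rate <= 100:
        raise ValueError("copy rate must be between 0 and 100")
    band = (copy_rate - 1) // 10          # 0..9
    return 128 * (2 ** band)

def estimated_copy_seconds(vdisk_gib, copy_rate):
    """Rough lower bound on background copy time, ignoring host I/O."""
    kbps = background_copy_rate_kbps(copy_rate)
    return (vdisk_gib * 1024 * 1024) / kbps

print(background_copy_rate_kbps(50))         # 2048 KBps (2 MBps) for the default rate
print(round(estimated_copy_seconds(1, 50)))  # ~512 seconds for a 1 GiB VDisk
```

At the default rate of 50, a 1 GiB VDisk therefore needs roughly eight and a half minutes of background copy at best; real elapsed time depends on controller load.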
Example 7-121 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1 FlashCopy Mapping, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1 FlashCopy Mapping, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1 FlashCopy Mapping, id [2], successfully created Example 7-122 shows the command to create a second FlashCopy mapping for VDisk DB_Source and Log_Source.
Example 7-122 Create additional FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2 FlashCopy Mapping, id [3], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2 FlashCopy Mapping, id [4], successfully created Example 7-123 shows the result of these FlashCopy mappings. The status of the mapping is idle_or_copied.
Example 7-123 Check the result of Multi-Target FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      idle_or_copied 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      idle_or_copied 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          idle_or_copied 0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      idle_or_copied 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      idle_or_copied 0        50        100            off                                       no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied
If you would like to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
SAN Volume Controller V5.1
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp 1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name
1  FCCG1
2  FCCG2
IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status   progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      prepared 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      prepared 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          copying  0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      prepared 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      prepared 0        50        100            off                                       no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp 1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name
1  FCCG1
2  FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2  53

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state; when all FlashCopy mappings in a consistency group enter this state, the consistency group is at idle_or_copied status. In this state, the FlashCopy mapping can be deleted and the target disk can be used independently if, for example, another target disk is to be used for the next FlashCopy of the particular source VDisk.
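Checking mappings one by one is easy to script. The sketch below parses captured svcinfo lsfcmapprogress output (a header line followed by one value line, as in the listings above) and reports whether every mapping in the scenario has completed; the captured strings are taken from the listings.

```python
def parse_lsfcmapprogress(output):
    """Parse 'svcinfo lsfcmapprogress <map>' output: a header line
    ('id progress') followed by one line of values."""
    lines = [line for line in output.strip().splitlines() if line.strip()]
    map_id, progress = lines[1].split()
    return int(map_id), int(progress)

# Captured outputs for the five mappings in the scenario
captured = {
    "DB_Map1":  "id progress\n0 23",
    "Log_Map1": "id progress\n1 23",
    "Log_Map2": "id progress\n4 23",
    "DB_Map2":  "id progress\n3 23",
    "App_Map1": "id progress\n2 53",
}

progress = {name: parse_lsfcmapprogress(out)[1] for name, out in captured.items()}
all_done = all(p == 100 for p in progress.values())
print(progress["App_Map1"], all_done)  # 53 False
```

A monitoring script would re-run the command periodically until all_done becomes True, at which point every mapping is in the idle_or_copied state.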
Note: Stop a FlashCopy mapping only when the data on the target VDisk is not in use, or when you want to modify the FlashCopy mapping. If a FlashCopy mapping is stopped while it is in the copying state with progress less than 100, the target VDisk becomes invalid and is set offline by the SVC.

Example 7-129 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.
Example 7-129 Stop APP_Map1 FlashCopy
IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
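Before deleting mappings and consistency groups, a script can verify from the delimited lsfcmap output that everything is in the idle_or_copied state. This sketch parses the listing above and groups the mappings by consistency group (mappings without a group, such as App_Map1, fall under the default group).

```python
import csv
import io

# Output copied from the 'svcinfo lsfcmap -delim ,' listing above.
LSFCMAP_OUTPUT = """\
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
"""

def mappings_by_group(output):
    """Group 'svcinfo lsfcmap -delim ,' rows by consistency group name.

    Mappings without a consistency group land under the empty-string key.
    """
    groups = {}
    for row in csv.DictReader(io.StringIO(output)):
        groups.setdefault(row["group_name"], []).append(row)
    return groups

groups = mappings_by_group(LSFCMAP_OUTPUT)
deletable = all(m["status"] == "idle_or_copied"
                for members in groups.values() for m in members)
print(sorted(groups), deletable)  # ['', 'FCCG1', 'FCCG2'] True
```

Only when deletable is True can all mappings be removed safely with svctask rmfcmap.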
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32

2. Define a FlashCopy mapping in which the non-space-efficient VDisk is the source and the space-efficient VDisk is the target. Specify a copy rate as high as possible, and activate the autodelete option for the mapping. See Example 7-134.
Example 7-134 svctask mkfcmap
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoSEV -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV command, as shown in Example 7-135.
Example 7-135 svctask prestartfcmap
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoSEV
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 7-136.
Example 7-136 svctask startfcmap IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoSEV
5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 7-137.
Example 7-137 svcinfo lsfcmapprogress IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoSEV id progress 0 63
6. The FlashCopy mapping remains visible while the background copy is still running, as shown in Example 7-138; when the copy completes, the mapping is deleted automatically because the autodelete option is active.
Example 7-138 svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source VDisk (App_Source) has been created. The migration has completed, as shown in Example 7-139.
Example 7-139 svcinfo lsvdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE id 8 name App_Source_SE IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 1.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801AB813F100000000000000B throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 1.00GB real_capacity 1.00GB free_capacity 0.77MB overallocation 99 autoexpand on warning 80 grainsize 32
Note: Independent of the real size that you defined for the target space-efficient VDisk, the real size will be at least the capacity of the source VDisk. To migrate a space-efficient VDisk to a fully allocated VDisk, you can follow the same scenario.
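The overallocation values in the two listings above can be reproduced from the other fields: overallocation is the virtual capacity divided by the real capacity, expressed as a percentage. The computed values differ from the listings by a point or so because the displayed capacities are rounded; this quick check is illustrative only.

```python
def overallocation_pct(capacity_mb, real_capacity_mb):
    """Overallocation = virtual capacity / real capacity, as a percentage."""
    return capacity_mb / real_capacity_mb * 100

# Before the copy: capacity 1.00 GB, real_capacity 221.17 MB -> listing shows 462
before = overallocation_pct(1024.0, 221.17)
# After the copy: capacity 1.00 GB, real_capacity 1.00 GB -> listing shows 99
after = overallocation_pct(1024.0, 1024.0)
print(round(before), round(after))
```

An overallocation near 100 therefore signals that the space-efficient VDisk has grown to its full virtual capacity, which is exactly what the migration produces.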
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk0 -target vdsk1 -name FCMAP0
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep FCMAP0
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk1 -target vdsk0 -name FCMAP0_rev
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep -restore FCMAP0_rev

id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental:partner_FC_id:partner_FC_name:restoring
0:FCMAP0:75:vdsk0:76:vdsk1:::copying:0:10:99:off:1:FCMAP0_rev:no
1:FCMAP0_rev:76:vdsk1:75:vdsk0:::copying:99:50:100:off:0:FCMAP0:yes

FCMAP0_rev will show a restoring value of "yes" while the FlashCopy mapping is copying. Once it has finished copying, the restoring value field will change to "no".
Stopping A->B without -split will result in the cascade B->C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, they can still start mapping A->B (using the -restore flag), but they cannot then reverse the mapping to A (B->A or C->A). Stopping A->B with -split would have resulted in the cascade A->C. This does not result in the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
Because data consistency is needed across the VDisks MM_DB_Pri and MM_DBLog_Pri, a consistency group, CG_WIN2K3_MM, is created to handle their Metro Mirror relationships. In this scenario, the application files are independent of the database, so a stand-alone Metro Mirror relationship is created for the VDisk MM_App_Pri. The Metro Mirror setup is illustrated in Figure 7-3.
In the following section, each step is carried out using the CLI.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. As shown in Example 7-141, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa, which confirms that both clusters are communicating with each other.
Example 7-141 Listing the available SVC cluster for partnership
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2

Example 7-142 shows the output of the svcinfo lscluster command before setting up the Metro Mirror partnership, so that you can compare it with the output after the partnership has been set up.
Example 7-142 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                          000002006AE04FC4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                          0000020063E03A38
To check the status of the newly created partnership, issue the svcinfo lscluster command. Also notice that the new partnership is only partially configured; it remains partially configured until the matching partnership is created on the other cluster.
Example 7-143 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 50        0000020063E03A38

In Example 7-144, the partnership is created from ITSO-CLS4 back to ITSO-CLS1, specifying a bandwidth of 50 MBps to be used for background copy. After creating the partnership, verify that it is fully configured on both clusters by reissuing the svcinfo lscluster command.
Example 7-144 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 50        000002006AE04FC4
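The -bandwidth parameter caps the rate, in MBps, that the partnership uses for background copy between the clusters. A rough, illustrative sizing sketch (the actual elapsed time also depends on link quality and on host write activity, which this ignores):

```python
def min_sync_seconds(total_gib, bandwidth_mbps):
    """Lower bound on initial Metro Mirror sync time: total data divided
    by the partnership's background copy bandwidth cap (in MBps)."""
    return total_gib * 1024 / bandwidth_mbps

# Three 1 GB VDisks (MM_DB_Pri, MM_Log_Pri, MM_App_Pri) over the 50 MBps cap
print(round(min_sync_seconds(3, 50), 1))  # 61.4 seconds, at best
```

Set the bandwidth cap below the real intercluster link capacity so that background copy cannot starve foreground host I/O.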
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type
0  CG_W2K3_MM 000002006AE04FC4  ITSO-CLS1           0000020063E03A38 ITSO-CLS4                empty 0                  empty_group
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name       IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state
13 MM_DB_Pri  0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000010 0            1          empty
14 MM_Log_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000011 0            1          empty
15 MM_App_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped                             6005076801AB813F1000000000000012 0            1          empty

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0  DB_Source
1  Log_Source
2  App_Source
3  App_Target_1
4  Log_Target_1
5  Log_Target_2
6  DB_Target_1
7  DB_Target_2
8  App_Source_SE
9  FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_Log_Sec
2  MM_App_Sec

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4  ITSO-CLS1           13              MM_DB_Pri         0000020063E03A38 ITSO-CLS4        0            MM_DB_Sec      master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
14 MMREL2 000002006AE04FC4  ITSO-CLS1           14              MM_Log_Pri        0000020063E03A38 ITSO-CLS4        1            MM_Log_Sec     master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2
id 14
name MMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 14
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 1
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 37
freeze_time
status online
sync
copy_type metro

When all Metro Mirror relationships have completed the background copy, the consistency group enters the consistent_synchronized state, as shown in Example 7-151.
Example 7-151 Listing the Metro Mirror consistency group.
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1
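Reaching the consistent_synchronized state can also be detected in a script by extracting the state field from captured lsrcconsistgrp output. The following sketch is our own illustration, not part of the SVC CLI: the helper name and the sample text (which stands in for a live `ssh admin@cluster svcinfo ...` call) are assumptions.

```shell
# Hypothetical helper (not an SVC command): extract the "state" field
# from captured `svcinfo lsrcconsistgrp <group>` output, which prints
# one "field value" pair per line. A wrapper script could poll this
# until the group reports consistent_synchronized.
rc_group_state() {
    awk '$1 == "state" { print $2; exit }'
}

# Sample text stands in for real command output.
sample='id 0
name CG_W2K3_MM
state consistent_synchronized
relationship_count 2'

printf '%s\n' "$sample" | rc_group_state
```

In practice, the output of the remote command would be piped into the helper instead of the sample text.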
SAN Volume Controller V5.1
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 If, afterward, we want to enable access (write I/O) to the secondary VDisk, we re-issue svctask stoprcconsistgrp with the -access flag, and the consistency group transitions to the Idling state, as shown in Example 7-154.
Example 7-154 Stopping a Metro Mirror consistency group and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3
master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate id configured cluster_name
IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate id configured name 000002006AE04FC4 no ITSO-CLS1 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 000002006AE04FC4 no ITSO-CLS1 0000020061006FCA no ITSO-CLS2
Example 7-160 shows the sequence of mkpartnership commands that must be run to create a Star Configuration.
Example 7-160 Star Configuration creation using the mkpartnership command
From ITSO-CLS1 to multiple clusters IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS3 to ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS4 to ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, we can create any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is in only one relationship.
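Because each partnership must be created from both sides, the command set for a given topology can be generated mechanically. The following sketch is our own illustration (the helper name is an assumption, not an SVC tool): it only prints the mkpartnership commands for the star topology above, using the cluster names and 50 MBps bandwidth from the example.

```shell
# Sketch: print both halves of each hub-spoke partnership for a star
# topology. Each printed command must be run on the cluster named
# first, because partnerships are created one-way from each side.
star_partnerships() {
    hub=$1; shift
    for spoke in "$@"; do
        echo "on $hub: svctask mkpartnership -bandwidth 50 $spoke"
        echo "on $spoke: svctask mkpartnership -bandwidth 50 $hub"
    done
}

star_partnerships ITSO-CLS1 ITSO-CLS2 ITSO-CLS3 ITSO-CLS4
```

For the four clusters above this produces six commands, matching Example 7-160.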
Triangle Configuration
Figure 7-5 on page 429 shows the Triangle Configuration.
Example 7-161 on page 429 shows the sequence of mkpartnership commands that must be run to create a Triangle Configuration.
Example 7-161 Triangle Configuration creation
From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3
After the SVC partnership has been configured, we can create any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is in only one relationship.
Fully-Connected Configuration
Figure 7-6 shows the Fully-Connected configuration.
Example 7-162 shows the sequence of mkpartnership commands that must be run to create a Fully-Connected Configuration.
Example 7-162 Fully-Connected creation
From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, we can create any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is in only one relationship.
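As with the star topology, the fully connected command set can be generated rather than typed by hand. This sketch is our own illustration (the helper name is an assumption); it prints, but does not run, the commands.

```shell
# Sketch: print the mkpartnership commands for a fully connected mesh,
# where every cluster partners with every other cluster. For N clusters
# this produces N*(N-1) commands (12 for the four clusters above).
mesh_partnerships() {
    for a in "$@"; do
        for b in "$@"; do
            [ "$a" = "$b" ] && continue
            echo "on $a: svctask mkpartnership -bandwidth 50 $b"
        done
    done
}

mesh_partnerships ITSO-CLS1 ITSO-CLS2 ITSO-CLS3 ITSO-CLS4
```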
Daisy-Chaining Configuration
Figure 7-7 on page 432 shows the Daisy-Chaining configuration.
Example 7-163 shows the sequence of mkpartnership commands that must be run to create a Daisy-Chaining Configuration.
Example 7-163 Daisy-Chaining configuration creation
From ITSO-CLS1 to ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS4 to ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA From ITSO-CLS4 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
After the SVC partnership has been configured, we can create any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is in only one relationship.
Since data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle Global Mirror relationships for them. Because the application files in this scenario are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. The Global Mirror relationship setup is illustrated in Figure 7-8.
Figure 7-8 (not reproduced) shows the setup: consistency group CG_W2K3_GM contains GM Relationship 1 (GM_DB_Pri to GM_DB_Sec) and GM Relationship 2 (GM_DBLog_Pri to GM_DBLog_Sec), while GM Relationship 3 (GM_App_Pri to GM_App_Sec) is a stand-alone relationship.
Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 7-164 confirms that our clusters are communicating: ITSO-CLS4 is an eligible SVC cluster partnership candidate at ITSO-CLS1, and vice versa.
Example 7-164 Listing the available SVC clusters for partnership
Example 7-165 shows the output of svcinfo lscluster before the SVC cluster partnership for Global Mirror has been set up; it can be compared with the output after the partnership is created.
Example 7-165 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias 0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias 0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38
Example 7-166 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster id name location partnership bandwidth id_alias 000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4 0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38 In Example 7-167, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying 10 MBps bandwidth to be used for the background copy. After creating the partnership, verify that the partnership is fully configured by re-issuing the svcinfo lscluster command.
Example 7-167 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38
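A script can confirm that every remote partnership has reached the fully_configured state by parsing captured `svcinfo lscluster -delim :` output. The following sketch is our own illustration; the helper name and the sample text (standing in for live output) are assumptions.

```shell
# Sketch: scan captured `svcinfo lscluster -delim :` output for remote
# partnerships that are not yet fully_configured (for example,
# partially_configured_local means the reverse mkpartnership has not
# been run on the other cluster yet). Exits nonzero if any are found.
check_partnerships() {
    awk -F: '$3 == "remote" && $4 != "fully_configured" {
        print $2 ": " $4
        bad = 1
    } END { exit bad }'
}

printf '%s\n' \
  '000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4' \
  '0000020063E03A38:ITSO-CLS4:remote:partially_configured_local:10:0000020063E03A38' \
  | check_partnerships || echo "partnership not fully configured yet"
```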
The Global Mirror delay simulation feature lets you test the effect of the additional remote copy latency on an application before full deployment of the Global Mirror feature. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run one of the following commands:
For intercluster: svctask chcluster -gminterdelaysimulation <inter_cluster_delay_simulation>
For intracluster: svctask chcluster -gmintradelaysimulation <intra_cluster_delay_simulation>
The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time (in milliseconds) by which secondary I/Os are delayed for intercluster and intracluster relationships, respectively. That is, they specify the number of milliseconds by which I/O activity (copying a primary VDisk to a secondary VDisk) is delayed. A value from 0 to 100 milliseconds, in 1 millisecond increments, can be set; a value of zero disables the feature. To check the current settings for the delay simulation, use the following command:
svcinfo lscluster <clustername>
In Example 7-168, we modify the delay simulation values and change the Global Mirror link tolerance parameter, and then show the changed values of both.
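The 0 to 100 ms range can be enforced by a small wrapper before the command is issued. This sketch is our own illustration (the helper name is an assumption); it only prints the chcluster command it would run.

```shell
# Sketch: validate a delay-simulation value before building the
# chcluster command. The 0-100 ms range, with 0 disabling the feature,
# follows the text above.
set_gm_delay() {
    scope=$1    # "inter" or "intra"
    delay=$2    # whole milliseconds
    case $delay in
        ''|*[!0-9]*) echo "delay must be a whole number of ms" >&2; return 1 ;;
    esac
    if [ "$delay" -gt 100 ]; then
        echo "delay must be between 0 and 100 ms" >&2
        return 1
    fi
    echo "svctask chcluster -gm${scope}delaysimulation $delay"
}

set_gm_delay inter 20
set_gm_delay intra 40
```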
Example 7-168 Delay simulation and link tolerance modification
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1 id 000002006AE04FC4 name ITSO-CLS1 location local partnership bandwidth total_mdisk_capacity 160.0GB space_in_mdisk_grps 160.0GB space_allocated_to_vdisks 19.00GB total_free_space 141.0GB statistics_status off statistics_frequency 15 required_memory 8192 cluster_locale en_US time_zone 520 US/Pacific code_level 5.1.0.0 (build 17.1.0908110000) FC_port_speed 2Gb console_IP id_alias 000002006AE04FC4 gm_link_tolerance 200 gm_inter_cluster_delay_simulation 20 gm_intra_cluster_delay_simulation 40 email_reply email_contact email_contact_primary email_contact_alternate
email_contact_location email_state invalid inventory_mail_interval 0 total_vdiskcopy_capacity 19.00GB total_used_capacity 19.00GB total_overallocation 11 total_vdisk_capacity 19.00GB cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method none iscsi_chap_secret auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no relationship_bandwidth_limit 25
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM RC Consistency Group, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type 0 CG_W2K3_GM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group
Example 7-170 Creating Global Mirror relationships GMREL1 and GMREL2 IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state 16 GM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000013 0 1 empty 17 GM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000014 0 1 empty 18 GM_DBLog_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000015 0 1 empty IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri id vdisk_name 0 MM_DB_Sec 1 MM_Log_Sec 2 MM_App_Sec 3 GM_App_Sec 4 GM_DB_Sec 5 GM_DBLog_Sec 6 SEV IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL1 -global RC Relationship, id [17], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL2 -global RC Relationship, id [18], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type 17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global 18 GMREL2
000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
When the -sync option is specified, the relationship is created with the assumption that the auxiliary (secondary) VDisk is already synchronized with the primary (master) virtual disk, and the initial background synchronization is skipped. GMREL1 and GMREL2 are in the inconsistent_stopped state because they were not created with the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.
Example 7-171 Creating a stand-alone Global Mirror relationship and verifying it IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -sync -name GMREL3 -global RC Relationship, id [16], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim : id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_ name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority :progress:copy_type 16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consist ent_stopped:50:100:global 17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_G M:inconsistent_stopped:50:0:global 18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_ W2K3_GM:inconsistent_stopped:50:0:global
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id
consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state inconsistent_copying relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1 id 17 name GMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 17 master_vdisk_name GM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 4 aux_vdisk_name GM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 38 freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2 id 18 name GMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 18 master_vdisk_name GM_DBLog_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 5 aux_vdisk_name GM_DBLog_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 40 freeze_time status online sync copy_type global When all the Global Mirror relationships complete the background copy, the consistency group enters the consistent synchronized state, as shown in Example 7-175.
Example 7-175 Listing the Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 If, afterward, we want to enable access (write I/O) for the secondary VDisk, we can re-issue the svctask stoprcconsistgrp command, specifying the -access parameter, and the consistency group transitions to the Idling state, as shown in Example 7-178.
Example 7-178 Stopping a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress
freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master
state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
#datapath query adapter
Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi0  NORMAL  ACTIVE    1445       0      4       4
    1  fscsi1  NORMAL  ACTIVE    1888       0      4       4
#datapath query device
Total Devices : 2
DEV#: 0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select  Errors
    0    fscsi0/hdisk3        OPEN     NORMAL         0       0
    1    fscsi1/hdisk7        OPEN     NORMAL       972       0
DEV#: 1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#    Adapter/Hard Disk    State    Mode      Select  Errors
    0    fscsi0/hdisk4        OPEN     NORMAL       784       0
    1    fscsi1/hdisk8        OPEN     NORMAL         0       0
Note: During a software upgrade, there are periods when not all of the nodes in the cluster are operational; as a result, the cache operates in write-through mode, which affects the throughput, latency, and bandwidth aspects of performance.
It is also worth double-checking that your UPS power configuration is set up correctly (even if your cluster is running without problems). Specifically, make sure:
- That your UPSs are all getting their power from an external source and are not daisy chained. In other words, make sure that each UPS is not supplying power to another node's UPS.
- That the power cable and the serial cable coming from each node go back to the same UPS. If the cables are crossed and go back to different UPSs, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.
Important: Do not share the SVC UPS with any other devices.
You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using datapath query commands. You do not need to check hosts that have no active I/O operations to the SAN during the software upgrade.
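The path check described above can be scripted against saved `datapath query device` output. This sketch is our own illustration (the helper name and sample rows are assumptions): it flags any path that is not OPEN or that reports errors.

```shell
# Sketch: flag suspect paths in saved `datapath query device` output
# before an upgrade: any path that is not OPEN, or that reports a
# nonzero error count. Exits nonzero if any suspect path is found.
check_paths() {
    awk '/fscsi/ {
        # Row layout: Path#  Adapter/HardDisk  State  Mode  Select  Errors
        if ($3 != "OPEN" || $6 + 0 != 0) { print "suspect: " $0; bad = 1 }
    } END { exit bad }'
}

printf '%s\n' \
  '0   fscsi0/hdisk3   OPEN    NORMAL   784   0' \
  '1   fscsi1/hdisk7   CLOSE   NORMAL   972   0' \
  | check_paths || echo "fix the paths before upgrading"
```

In practice, the saved output for each host would be fed through the helper instead of the sample rows.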
Procedure
To upgrade the SVC cluster software, perform the following steps: 1. Before starting the upgrade, you must back up the configuration (see 7.14.9, Backing up the SVC cluster configuration on page 464) and save the backup config file in a safe place. 2. Also, save the data collection for support diagnosis just in case of problems, as shown in Example 7-184.
Example 7-184 svc_snap
IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz

3. List the dump generated by the previous command, as shown in Example 7-185.
Example 7-185 svcinfo ls2145dumps
2  svc.config.cron.bak_node1
3  dump.104643.070803.015424
4  dump.104643.071010.232740
5  svc.config.backup.bak_ITSOCL1_N1
6  svc.config.backup.xml_ITSOCL1_N1
7  svc.config.backup.tmp.xml
8  svc.config.cron.bak_ITSOCL1_N1
9  dump.104643.080609.202741
10 104643.080610.154323.ups_log.tar.gz
11 104643.trc.old
12 dump.104643.080609.212626
13 104643.080612.221933.ups_log.tar.gz
14 svc.config.cron.bak_Node1
15 svc.config.cron.log_Node1
16 svc.config.cron.sh_Node1
17 svc.config.cron.xml_Node1
18 dump.104643.080616.203659
19 104643.trc
20 ups_log.a
21 snap.104643.080617.002427.tgz
22 ups_log.b
4. Save the generated dump in a safe place using the pscp command, as shown in Example 7-186.
Example 7-186 pscp -load
C:\>pscp -load ITSOCL1 admin@9.43.86.117:/dumps/snap.104643.080617.002427.tgz c:\
snap.104643.080617.002427 | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 7-187.
Example 7-187 pscp -load
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%

6. Upload the SAN Volume Controller Software Upgrade Test Utility using PuTTY Secure Copy. Enter the command as shown in Example 7-188.
Example 7-188 Upload utility
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_svcupgradetest_1.11 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%
7. Check that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 7-189.
Example 7-189 svcinfo lssoftwaredumps
0 IBM2145_INSTALL_4.3.0.0
1 IBM2145_INSTALL_svcupgradetest_1.11
8. Now that the packages are uploaded, first install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 7-190.
Example 7-190 svctask applysoftware
IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_1.11
CMMVC6227I The package installed successfully.

9. Using the following command, test the upgrade for known issues that may prevent a software upgrade from completing successfully, as shown in Example 7-191.
Example 7-191 svcupgradetest
IBM_2145:ITSO-CLS1:admin>svcupgradetest
svcupgradetest version 1.11. Please wait while the tool tests for issues
that may prevent a software upgrade from completing successfully.
The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the above command produces any errors, troubleshoot them using the maintenance procedures before continuing further.

10. Now use the svctask command set to apply the software upgrade, as shown in Example 7-192.
Example 7-192 Apply upgrade command example
IBM_2145:ITSOSVC42A:admin>svctask applysoftware -file IBM2145_INSTALL_4.3.0.0

While the upgrade is running, you can check the status, as shown in Example 7-193.
Example 7-193 Check update status
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwareupgradestatus
status
upgrading

11. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted in turn. If a node does not restart automatically during the upgrade, it must be repaired manually.

Note: If you are using SSDs, the data on the SSDs within the restarted node is not available during the reboot.

12. Eventually, both nodes display Cluster: on line one of the SVC front panel and the name of your cluster on line two. Be prepared for a long wait (in our case, we waited approximately 40 minutes).
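Rather than retyping the status command during the long wait, the polling can be scripted. This is a sketch only: it assumes lssoftwareupgradestatus reports "upgrading" while nodes are being updated and "inactive" once the upgrade completes, and "svc" stands for whatever ssh/plink wrapper you use to reach the cluster.

```shell
# Sketch: interpret the output of "svcinfo lssoftwareupgradestatus".
# The status strings are assumptions based on the output shown above.
upgrade_finished() {
    [ "$1" = "inactive" ]
}

# Hypothetical polling loop ("svc" is your ssh/plink session wrapper):
# while :; do
#     status=$(svc svcinfo lssoftwareupgradestatus -nohdr)
#     upgrade_finished "$status" && break
#     sleep 60
# done
```

Keep the polling interval generous; the CLI can be sluggish during the upgrade, as noted below.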
Note: During this process, both your CLI and GUI vary from sluggish (very slow) to unresponsive. The important thing is that I/O to the hosts can continue.

13. To verify that the upgrade was successful, you can perform either of the following options:
- Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in Example 7-194. We have truncated the lscluster and lsnodevpd information for this example.
Example 7-194 svcinfo lscluster and lsnodevpd commands
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
id 0000020060806FCA
name ITSO-CLS1
location local
partnership
bandwidth
cluster_IP_address 9.43.86.117
cluster_service_IP_address 9.43.86.118
total_mdisk_capacity 756.0GB
space_in_mdisk_grps 756.0GB
space_allocated_to_vdisks 156.00GB
total_free_space 600.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.252.0
default_gateway 9.43.85.1
time_zone 522 UTC
email_setting
email_id
code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply itsotest@ibm.com
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6
IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1

system board: 24 fields
part_number 31P0906
system_serial_number 13DVT31
number_of_processors 4
number_of_memory_slots 8
number_of_fans 6
number_of_FC_cards 1
number_of_scsi/ide_devices 2
BIOS_manufacturer IBM
BIOS_version -[GFE136BUS-1.09]
BIOS_release_date 02/08/2008
system_manufacturer IBM
system_product IBM System x3550 -[21458G4]
...

software: 6 fields
code_level 4.3.0.0 (build 8.15.0806110000)
node_name Node1
ethernet_status 1
WWNN 0x50050768010037e5
id 1

- Copy the error log to your management workstation, as explained in 7.14.2, Running maintenance procedures on page 454. Open it in WordPad and search for Software Install completed.

You have now completed the tasks required to upgrade the SVC software.
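A quick scripted version of the same verification: save the lsnodevpd output for each node to a file and confirm that they all report one code_level. This is a sketch; the grep pattern assumes the field layout shown above.

```shell
# Sketch: verify that all saved "svcinfo lsnodevpd" outputs agree on
# code_level. Pass one file per node; succeeds only when every file
# reports the same level.
same_code_level() {
    n=$(grep -h "^code_level" "$@" | sort -u | wc -l)
    [ "$n" -eq 1 ]
}
```

A nonzero exit means at least one node is still on a different level, so the upgrade did not complete cleanly on every node.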
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog

This generates a file called errlog_timestamp, such as errlog_100048_080618_042419, where:
- errlog is part of the default prefix for all error log files.
- 100048 is the panel name of the current configuration node.
- 080618 is the date (YYMMDD).
- 042419 is the time (HHMMSS).

You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 7-196).
Example 7-196 svctask dumperrlog -prefix
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog -prefix svcerrlog This command creates a file called svcerrlog_timestamp. To see what the file name is, you must enter the following command (Example 7-197).
Example 7-197 svcinfo lserrlogdumps
IBM_2145:ITSO-CLS2:admin>svcinfo lserrlogdumps
id filename
0 errlog_100048_080618_042049
1 errlog_100048_080618_042128
2 errlog_100048_080618_042355
3 errlog_100048_080618_042419
4 errlog_100048_080618_175652
5 errlog_100048_080618_175702
6 errlog_100048_080618_175724
7 errlog_100048_080619_205900
8 errlog_100048_080624_170214
9 svcerrlog_100048_080624_170257

Note: A maximum of ten error log dump files per node is kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node in order to maintain the maximum number of files. The SVC does not delete files from other nodes unless you issue the cleandumps command.

After you generate your error log, you can issue the svctask finderr command to scan it for any unfixed errors, as shown in Example 7-198.
Example 7-198 svctask finderr
IBM_2145:ITSO-CLS2:admin>svctask finderr
Highest priority unfixed error code is [1230]

As you can see, we have one unfixed error on our system. To analyze it in more detail, you need to download the error log onto your own PC. Use the PuTTY Secure Copy process to copy the file from the cluster to your local management workstation, as shown in Example 7-199 on page 455.
Example 7-199 pscp command: Copy error logs off SVC
C:\Program Files\PuTTY>pscp -load SVC_CL2 admin@9.43.86.119:/dumps/elogs/svcerrlog_100048_080624_170257 c:\temp\svcerrlog.txt
svcerrlog.txt | 6390 kB | 3195.1 kB/s | ETA: 00:00:00 | 100%

In order to use the Run option, you must know where your pscp.exe is located; in this case, it is in C:\Program Files\PuTTY\. This command copies the file called svcerrlog_100048_080624_170257 to the C:\temp directory on our local workstation and names the file svcerrlog.txt. Open the file in WordPad (Notepad does not format the output as well). You should see information similar to what is shown in Example 7-200. The list was truncated for the purposes of this example.
Example 7-200 errlog in WordPad
Error Log Entry 400
 Node Identifier       : Node2
 Object Type           : device
 Object ID             : 0
 Copy ID               :
 Sequence Number       : 37404
 Root Sequence Number  : 37404
 First Error Timestamp : Sat Jun 21 00:08:21 2008
                       : Epoch + 1214006901
 Last Error Timestamp  : Sat Jun 21 00:11:36 2008
                       : Epoch + 1214007096
 Error Count           : 2
 Error ID              : 10013 : Login Excluded
 Error Code            : 1230 : Login excluded
 Status Flag           : UNFIXED
 Type Flag             : TRANSIENT ERROR

 [hexadecimal sense data omitted]
Scrolling through, or searching for the term "unfixed", you should find more detail about the problem. There can be more entries in the error log that have a status of unfixed. After you take the necessary steps to rectify the problem, you can mark the error as fixed in the log by issuing the svctask cherrstate command against its sequence number (Example 7-201).
Example 7-201 svctask cherrstate
IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37404

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 7-202.
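When several entries are unfixed, collecting their sequence numbers by hand is tedious. The following sketch pulls them from the formatted error log text copied off the cluster; the field labels follow Example 7-200, and the exact spacing is an assumption.

```shell
# Sketch: print the Sequence Number of every UNFIXED entry in a formatted
# error log file. Field labels as in Example 7-200; "Root Sequence Number"
# lines are excluded so only the entry's own sequence number is captured.
unfixed_sequences() {
    awk '/Sequence Number/ && !/Root/   { seq = $NF }
         /Status Flag/     && /UNFIXED/ { print seq }' "$1"
}

# Each printed number could then be fed, one at a time, to:
#   svctask cherrstate -sequencenumber <number>
```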
IBM_2145:ITSO-CLS2:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created

This command sends all error, warning, and informational events to the SVC community on the SNMP manager with the IP address 9.43.86.160.
IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created

After we have configured our syslog server, we can display the syslog servers that are currently configured in our cluster, as shown in Example 7-205.
Example 7-205 svcinfo lssyslogserver
IBM_2145:ITSO-CLS2:admin>svcinfo lssyslogserver
id name        IP_address    facility error warning info
0  Syslogsrv   10.64.210.230 4        on    on      on
1  Syslogserv1 10.64.210.231 0        on    on      on
IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an email user who will receive email notifications from the SVC cluster. Up to 12 users can be defined to receive emails from our SVC. Using the svcinfo lsemailuser command, we can verify who is already registered and what kind of information is sent to that user, as shown in Example 7-207.
Example 7-207 svcinfo lsemailuser
IBM_2145:ITSO-CLS2:admin>svcinfo lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on
We can also create a new user as shown in Example 7-208 for a SAN administrator.
Example 7-208 svctask mkemailuser
IBM_2145:ITSO-CLS2:admin>svctask mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on User, id [1], successfully created
Unfixed errors: Errors that were detected and recorded in the cluster error log and that have not yet been corrected or repaired.

Fixed errors: Errors that were detected and recorded in the cluster error log and that have subsequently been corrected or repaired.

To display the error log, use the svcinfo lserrlog or svcinfo caterrlog commands, as shown in Example 7-209 (the output is the same).
Example 7-209 svcinfo caterrlog command
IBM_2145:ITSOSVC42A:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094858:070606094858:1:00990145
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094539:070606094539:1:00990173
0:internal:no:no:5:SVCNode_1:0:0:070606094507:070606094507:1:00990219
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094208:070606094208:1:00990148
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094139:070606094139:1:00990145
.........

IBM_2145:ITSO-CLS1:admin>svcinfo caterrlog -delim ,
id,type,fixed,SNMP_trap_raised,error_type,node_name,sequence_number,root_sequence_number,first_timestamp,last_timestamp,number_of_errors,error_code,copy_id
0,cluster,no,yes,6,n4,171,170,080624115947,080624115947,1,00981001,
0,cluster,no,yes,6,n4,170,170,080624115932,080624115932,1,00981001,
0,cluster,no,no,5,n1,0,0,080624105428,080624105428,1,00990101,
0,internal,no,no,5,n1,0,0,080624095359,080624095359,1,00990219,
0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
...

This command displays the error log that was last generated. Use the method described in 7.14.2, Running maintenance procedures on page 454, to upload and analyze the error log in more detail. To clear the error log, you can issue the svctask clearerrlog command, as shown in Example 7-210.
Example 7-210 svctask clearerrlog
IBM_2145:ITSO-CLS1:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag stops any confirmation requests from appearing. When executed, this command clears all entries from the error log. It proceeds even if there are unfixed errors in the log, and it also clears any status events that are in the log.
This is a destructive command for the error log and should only be used when you have rebuilt the cluster, or when you have fixed a major problem that caused many entries in the error log that you do not want to fix manually.
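Before clearing the log, you might want to confirm how many entries are still unfixed. The following sketch parses saved caterrlog -delim , output; it assumes the fixed flag is the third field, per the header shown in Example 7-209.

```shell
# Sketch: count unfixed entries in saved "svcinfo caterrlog -delim ,"
# output. Field 3 is the "fixed" column per the header in Example 7-209;
# the first line (the header itself) is skipped.
count_unfixed() {
    awk -F, 'NR > 1 && $3 == "no" { n++ } END { print n + 0 }' "$1"
}
```

A nonzero count suggests reviewing those entries with the maintenance procedures before issuing svctask clearerrlog.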
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 20
license_virtualization 80

The current license settings for the cluster are displayed in the viewing license settings log panel. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries because feature options must be set as part of the Web-based cluster creation process.

Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro Mirror and Global Mirror feature. The command you need to enter is shown in Example 7-212.
Example 7-212 svctask chlicense
IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25

To turn a feature off, specify 0 TB as the capacity for the feature you want to disable. To verify that the changes you have made are reflected in your SVC configuration, you can issue the svcinfo lslicense command as before (see Example 7-213).
Example 7-213 svcinfo lslicense command: Verifying changes
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 25
license_virtualization 80
- lserrlogdumps
- lsfeaturedumps
- lsiotracedumps
- lsiostatsdumps
- lssoftwaredumps
- ls2145dumps

If no node is specified, the dumps that are available on the configuration node are listed.
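The listing commands above can be run in one pass for a given node. This is a sketch only: the real CLI calls are commented out, and "svc" stands for a hypothetical ssh wrapper for your cluster session.

```shell
# Sketch: print a section header for each dump-listing command on a node.
# The command names come from the list above; the actual remote calls
# are commented out ("svc" is a hypothetical ssh wrapper).
list_all_dumps() {
    for cmd in lserrlogdumps lsfeaturedumps lsiotracedumps \
               lsiostatsdumps lssoftwaredumps ls2145dumps; do
        echo "== svcinfo $cmd $1 =="
        # svc svcinfo "$cmd" "$1"
    done
}
```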
IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps
id filename
0 errlog_104643_080617_172859
1 errlog_104643_080618_163527
2 errlog_104643_080619_164929
3 errlog_104643_080619_165117
4 errlog_104643_080624_093355
5 svcerrlog_104643_080624_094301
6 errlog_104643_080624_120807
7 errlog_104643_080624_121102
8 errlog_104643_080624_122204
9 errlog_104643_080624_160522
Collection of the I/O trace data is started by using the svctask starttrace command and stopped by using the svctask stoptrace command. When the trace is stopped, the data is written to a file named prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value entered by the user for the -filename parameter in the svctask settrace command. The command to list all dumps in the /dumps/iotrace directory is svcinfo lsiotracedumps (Example 7-216).
Example 7-216 svcinfo lsiotracedumps
Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Any files in this directory were copied there at the time that you wanted to perform a software upgrade. Example 7-218 shows the command.
Example 7-218 svcinfo lssoftwaredumps
Note: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
- The wildcard character is an asterisk (*).
- The command can contain a maximum of one wildcard.
- When you use a wildcard, you must surround the filter entry with double quotation marks (""), as follows:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"
IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis, as described earlier.

To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps from a node other than the current configuration node (the default for the svctask cleardumps command). The commands in Example 7-220 clear all logs or dumps from the SVC node n1.
Example 7-220 svctask cleardumps command
n1
Prerequisites
You must have the following prerequisites in place:
- All nodes must be online.
- No object name can begin with an underscore.
- All objects should have non-default names, that is, names that are not assigned by the SAN Volume Controller. Although we recommend that objects have non-default names at the time the backup is taken, this is not mandatory; objects with default names are renamed when they are restored.

Example 7-222 shows an example of the svcconfig backup command.
Example 7-222 svcconfig backup command
IBM_2145:ITSO-CLS1:admin>svcconfig backup
......
CMMVC6130W Inter-cluster partnership fully_configured will not be restored
...................
CMMVC6112W io_grp io_grp0 has a default name
CMMVC6112W io_grp io_grp1 has a default name
CMMVC6112W mdisk mdisk18 has a default name
CMMVC6112W mdisk mdisk19 has a default name
CMMVC6112W mdisk mdisk20 has a default name
................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully

Example 7-223 shows the pscp command.
Example 7-223 pscp command
C:\Program Files\PuTTY>pscp -load SVC_CL1 admin@9.43.86.117:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration.
2. Store the backup configuration on some form of tertiary storage. You must copy the backup file from the cluster, or it becomes lost if the cluster crashes.
3. If a severe enough failure occurs, the cluster might be lost. Both the configuration data (for example, the cluster definitions of hosts, I/O groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can carry this out, you must reinstate the cluster as it was configured at the time of the failure. This means you restore the same MDGs, I/O groups, host definitions, and VDisks that existed prior to the failure. Then you can copy the application data back onto these VDisks and resume operations.
4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and the SAN fabric. The hardware and SAN fabric must physically be the same as those used before the failure.
5. Re-initialize the cluster with only the configuration node; the other nodes are recovered when the configuration is restored.
6. Restore your cluster configuration using the backup configuration file generated prior to the failure.
7. Restore the data on your virtual disks (VDisks) using your preferred restore solution or with help from IBM Service.
8. Resume normal operations.
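The scenario above depends on the backup file actually leaving the cluster (step 2). As a sketch of how steps 1 and 2 could be scripted from the management workstation: the session name SVC_CL1, the IP address, and the paths are taken from the examples above as assumptions, and the date-stamped helper simply keeps successive copies from overwriting one another.

```shell
# Sketch: build a date-stamped local name for the retrieved backup so
# successive (for example, nightly) copies do not overwrite one another.
backup_name() {
    echo "svc.config.backup_$1.xml"
}

# Hypothetical nightly sequence using the PuTTY tools:
# plink -load SVC_CL1 "svcconfig backup"
# pscp  -load SVC_CL1 admin@9.43.86.117:/tmp/svc.config.backup.xml \
#       "c:\temp\$(backup_name "$(date +%y%m%d)")"
```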
Chapter 8.
From the Welcome screen select the Work with Virtual Disks option, and select the Virtual Disks link.
Table filtering
When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the visible list, which is useful if the list of entries is too large to work with. You can change the filtering here as many times as you like, to further reduce the lists or for different views. Use the Filter Row Icon, as shown in Figure 8-2, or use the Show Filter Row option in the drop-down menu and click Go.
This enables you to filter based on the column names, as shown in Figure 8-3. The Filter under each column name shows that no filter is in effect for that column.
If you want to filter on a column, click the word Filter, which opens up a filter dialog, as shown in Figure 8-4 on page 472.
A list of VDisks is displayed that contains only those with 01 somewhere in the name, as shown in Figure 8-5. (Notice the filter line under each column heading, showing that our filter is in place.) If you want, you can perform additional filtering on the other columns to further narrow your view.
The option to reset the filters is shown in Figure 8-6. Use the Clear All Filters icon or use the Clear All Filters option in the drop-down menu and click Go.
Sorting
Regardless of whether you use the pre-filter or additional filter options, when you are in the Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the list and clicking Go, or you can click the small icon highlighted by the mouse pointer in Figure 8-7.
As shown in Figure 8-8, you can sort based on up to three criteria, including Name, State, I/O Group, MDisk Group, Capacity (MB), Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name, UID, and Copies. Note: The actual sort criteria differs based on the information that you are sorting.
When you finish making your choices, click OK to regenerate the display based on your sorting criteria. Look at the icons next to each column name to see the sort criteria currently in use, as shown in Figure 8-9. If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click the icon highlighted by the mouse pointer in Figure 8-9.
8.1.2 Documentation
If you need to access the online documentation, in the upper right corner of the window, click the icon. This opens the Help Assistant pane on the left side of the window, as shown in Figure 8-10.
8.1.3 Help
If you need to access the online help, in the upper right corner of the window, click the icon. This opens a new window called Information Center. Here you can search on any item you want help for (see Figure 8-11 on page 476).
Figure 8-12 Showing possible processes to view where MDisk is being removed from MDG
3. When you click the controller Name (Figure 8-13), the Viewing General Details window (Figure 8-14) opens for the controller (where Name is the Controller you selected). Review the details and click Close to return to the previous window.
3. You return to the Disk Controller Systems window. You should now see the new name of your controller displayed. Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word controller, because this prefix is reserved for SVC assignment only.
Tip: If at any time the content in the right side of the frame is abbreviated, you can minimize the My Work column by clicking the arrow to the right of the My Work heading at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on page 479). After you minimize the column, you see an arrow in the far left position in the same location where the My Work column formerly appeared.

2. Review the details and then click Close to return to the previous window.
Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word MDisk, because this prefix is reserved for SVC assignment only.
After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair SAN zones), you can tell the SVC to include the MDisk again.
2. You now see a subset (specific to the MDisk you chose in the previous step) of the View Virtual Disks window in Figure 8-22. We cover the View Virtual Disks window in more detail in 8.4, Working with hosts on page 493.
To retrieve information about a specific MDG, perform the following steps: 1. In the Viewing Managed Disk Groups window (Figure 8-23), click the underlined name of any MDG in the list. 2. In the View MDisk Group Details window (Figure 8-24), you see more detailed information about the specified MDisk. Here you see information pertaining to the number of MDisks and VDisks as well as the capacity (both total and free space) within the MDG. When you finish viewing the details, click Close to return to the previous window.
1. From the SVC Welcome screen (Figure 8-1 on page 470), select the Work with Managed Disks option and then the Managed Disks Groups link. 2. The Viewing Managed Disks Groups window opens (see Figure 8-25 on page 484). Select Create an MDisk Group from the list and click Go.
3. In the window Create Managed Disk Group, the wizard will give you an overview of what will be done. Click Next. 4. While in the window Name the group and select the managed disks (Figure 8-26 on page 485), follow these steps: a. Type a name for the MDG. Note: If you do not provide a name, the SVC automatically generates the name MDiskgrpX, where X is the ID sequence number assigned by the SVC internally. If you want to provide a name (as we have), you can use the letters A to Z, a to z, numbers 0 to 9, and the underscore. It can be between one and 15 characters in length and is case sensitive, but cannot start with a number or the word MDiskgrp, because this prefix is reserved for SVC assignment only. b. From the MDisk Candidates box as shown in Figure 8-26, one at a time, select the MDisks to put into the MDG. Click Add to move them to the Selected MDisks box. There may be more than one page of disks; you may navigate between the windows (the MDisks you selected will be preserved). c. You can specify a threshold to send a warning to the error log when the capacity is first exceeded. It can either be a percentage or a specific amount. d. Click Next.
Figure 8-26 Name the group and select the managed disks window
5. From the list shown in Figure 8-27, select the extent size to use (a typical value is 512). When you select a specific extent size, the window displays the resulting total manageable cluster capacity in TB. Click Next.
6. In the window Verify Managed Disk Group (Figure 8-28), verify that the information specified is correct. Click Finish.
7. Return to the Viewing Managed Disk Groups window (Figure 8-29) where the MDG is displayed.
From the Renaming Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), type the new name you want to assign and click OK (see Figure 8-31). You can also set/change the usage threshold from this window. Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, a dash, and the underscore. It can be between one and 15 characters in length, but cannot start with a number, a dash, or the word mdiskgrp, because this prefix is reserved for SVC assignment only.
It is considered a best practice to enable the capacity warning for your MDGs. The range used should be addressed in the planning phase of the SVC installation, though this range can always be changed without interruption.
3. If there are MDisks and VDisks within the MDG you are deleting, you are required to click Forced delete for the MDG (Figure 8-33 on page 488).
Important: If you delete an MDG with the Forced Delete option, and VDisks were associated with that MDisk group, you will lose the data on your VDisks, since they are deleted before the MDisk Group. If you want to save your data, migrate or mirror the VDisks to another MDisk group before you delete the MDisk group previously assigned to it.
2. From the Adding Managed Disks to Managed Disk Group MDiskname window (where MDiskname is the MDG you selected in the previous step), select the desired MDisk or MDisks from the MDisk Candidates list (Figure 8-35). After you select all the desired MDisks, click OK.
2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), select the desired MDisk or MDisks from the list (Figure 8-37). After you select all the desired MDisks, click OK.
3. If VDisks are using the MDisks that you are removing from the MDG, you are required to click the Forced Delete button to confirm the removal of the MDisk, as shown in Figure 8-38.
4. An error message is displayed if there is not sufficient space to migrate the VDisk data to other extents on other MDisks in that MDG.
Note: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS4000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).
2. You now see a subset (specific to the MDG you chose in the previous step) of the Viewing Managed Disk window (Figure 8-41) shown in 8.2.4, Managed disks on page 479.
491
Note: Remember, you can collapse the column entitled My Work at any time by clicking the arrow to the right of the My Work column heading.
2. You see a subset (specific to the MDG you chose in the previous step) of the Viewing Virtual Disks window in Figure 8-43. We cover the Viewing Virtual Disks window in more detail in VDisk information on page 504.
492
You have now completed the tasks required to manage the disk controller systems, managed disks, and MDGs within the SVC environment.
493
b. You can click the Port Details (Figure 8-46) link to see attachment information such as the WWPNs or IQN defined for this host.
c. You can click Mapped I/O Groups (Figure 8-47) to see which I/O groups this host can access.
494
d. As stated before, a new feature in SVC 5.1 is that hosts can now be created with either Fibre Channel or iSCSI connections. If we select iSCSI for the host in this example, we do not see any iSCSI parameters (as shown in Figure 8-48), because this host already has an FC port configured, as shown in Figure 8-46 on page 494.
When you are finished viewing the details, click Close to return to the previous window.
495
2. In the Creating Hosts window (Figure 8-50 on page 497), type a name for your host (Host Name).
Note: If you do not provide a name, the SVC automatically generates the name hostX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number or the word host, because this prefix is reserved for SVC assignment only. Although using an underscore might work in some circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and can cause problems. Therefore, we recommend that you do not use the underscore in host names.
3. Select the mode (Type) for the host. The default is Generic and should be used for all hosts, except when you are using HP-UX or Sun: select HP_UX to have more than eight LUNs supported for HP-UX machines, or TPGS for Sun hosts using MPxIO.
4. Select the connection type, either Fibre Channel or iSCSI. If you select Fibre Channel, you are asked for the port mask and the WWPN of the server you are creating. If you select iSCSI, you are asked for the iSCSI initiator name (commonly called the IQN) and the CHAP authentication secret, which ensures authentication of the target host and volume access.
5. You can use a port mask to control the node target ports that a host can access. The port mask applies to logins from the host initiator ports that are associated with the host object.
Note: For each login between a host HBA port and a node port, the node examines the port mask that is associated with the host object of which the host HBA is a member, and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled).
The right-most bit in the mask corresponds to the lowest numbered SVC port (1 not 4) on a node. As shown in Figure 8-50 on page 497, our port mask is 1111; this means that the host HBA port can access all node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are enabled for this host access.
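The bit-to-port mapping above can be sketched as a small helper (our own illustration, not SVC code):

```python
def ports_enabled(mask: str) -> set:
    """Translate a four-bit SVC port mask into the set of enabled node ports.

    The right-most bit corresponds to port 1, the left-most to port 4.
    """
    if len(mask) != 4 or any(b not in "01" for b in mask):
        raise ValueError("mask must be four binary digits")
    # Reverse the string so that index 0 lines up with port 1.
    return {i + 1 for i, bit in enumerate(reversed(mask)) if bit == "1"}
```

With mask 1111 all four ports are enabled, and with 0011 only ports 1 and 2 are enabled, matching the example in the text.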
496
6. Select and add the worldwide port names (WWPNs) that correspond to your HBA or HBAs. Click OK. In some cases, your WWPNs might not be displayed, although you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs into the Additional Ports field (type in WWPNs, one per line) at the bottom of the window and mark Do not validate WWPN before you click OK.
This brings you back to the viewing host window (Figure 8-51) where you can see the newly added host.
497
Before we can use iSCSI, we must configure the cluster for the iSCSI option. This is shown in iSCSI attached hosts on page 497. To create an iSCSI attached host, from the Welcome window select Working with hosts, and from there select Hosts. From the drop-down list, select Create a Host, as shown in Figure 8-52 on page 498.
In the Creating Hosts window (Figure 8-53), type a name for your host (Host Name).
1. Select the mode (Type) for the host. The default is Generic and should be used for all hosts, except when you are using HP-UX or Sun: select HP_UX to have more than eight LUNs supported for HP-UX machines, or TPGS for Sun hosts using MPxIO.
2. Select iSCSI as the connection type.
3. The iSCSI initiator name, or IQN, is iqn.1991-05.com.microsoft:freyja. This is obtained from the server and serves, in general, the same purpose as the WWPN.
4. The CHAP secret is the authentication method used to prevent other iSCSI hosts from using the same connection. The CHAP secret can be set for the whole cluster under the cluster properties, or for each host definition. It must be identical on the server and in the cluster or host definition. It is possible to create an iSCSI host definition without using a CHAP secret.
In Figure 8-53 we set the parameters for our host, called Freyja.
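The IQN used above follows a fixed structure (date, reversed naming authority, optional identifier). A small parser sketch makes the parts visible; the function name and return format are our own:

```python
import re

# IQN format: iqn.<year>-<month>.<reversed-domain>[:<identifier>]
IQN_RE = re.compile(r"iqn\.(\d{4})-(\d{2})\.([^:]+)(?::(.+))?")

def parse_iqn(iqn: str) -> dict:
    """Split an iSCSI qualified name (IQN) into its components."""
    m = IQN_RE.fullmatch(iqn)
    if not m:
        raise ValueError("not a valid IQN")
    year, month, domain, ident = m.groups()
    return {
        "date": f"{year}-{month}",          # when the naming authority owned the domain
        "naming_authority": domain,          # reversed domain name
        "identifier": ident,                 # host-specific part, e.g. the server name
    }
```

Applied to the example IQN, iqn.1991-05.com.microsoft:freyja, this yields the date 1991-05, the authority com.microsoft, and the identifier freyja.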
498
2. From the Modifying Host window (Figure 8-55), type the new name you want to assign or change the Type parameter and click OK.
499
Note: If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number or the word host, because this prefix is reserved for SVC assignment only. While using an underscore might work in some circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can cause problems. Therefore, we recommend that you do not use the underscore in host names.
2. In the Deleting Host hostname window (where hostname is the host you selected in the previous step), click OK if you are sure you want to delete the host. See Figure 8-57.
500
3. If you still have VDisks associated with the host, you will see a window (Figure 8-58) requesting confirmation for the forced deletion of the host. Click OK and all the mappings between this host and its VDisks are deleted before the host is deleted.
2. From the Adding ports window, select whether you are adding a Fibre Channel port (WWPN) or an iSCSI port (IQN initiator). Either select the desired WWPN from the Available Ports list and click Add, or enter the new IQN in the iSCSI field. After adding the WWPN or IQN, click OK. See Figure 8-60 on page 502. If your WWPNs are not in the Available Ports list, and you are sure your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up, you can manually type the WWPN of your HBAs into the Add Additional Ports field at the bottom of the window before you click OK.
501
Figure 8-61 on page 502 shows where an IQN is added to our host called Thor.
502
2. On the Deleting Ports From hostname window (where hostname is the host you selected in the previous step), start by selecting the connection type of the port you want to delete. For a Fibre Channel port, select the port you want to delete from the Available Ports list and click Add. When you have moved all the ports you want to delete from your host into the column on the right, click OK. For the iSCSI connection type, select the ports from the available iSCSI initiators and click Add. Figure 8-63 on page 503 shows how we select a WWPN port to delete, and in Figure 8-64 we have selected an iSCSI initiator to delete.
3. If you have VDisks that are associated with the host, you receive a warning about deleting a host port. You need to confirm your action when prompted, as shown in Figure 8-65. A similar warning message appears if you are deleting an iSCSI port.
503
504
1. In the Viewing Virtual Disks window, click the underlined name of the desired VDisk in the list. 2. The next window (Figure 8-67) that opens shows detailed information. Review the information. When you are done, click Close to return to the Viewing Virtual Disks window.
505
a. Choose what type of VDisk you want to create, striped or sequential.
b. Select the cache mode, Read/Write or None.
c. If you want, enter a unit device identifier.
d. Enter the number of VDisks you want to create.
e. You can select the Space-efficient or Mirrored Disk check box, which expands the respective section with extra options.
f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its managed disk extents).
g. Click Next.
5. Select the MDG of which you want the VDisk to be a member.
a. If you selected Striped, you see the window shown in Figure 8-70. You must select the MDisk group; the Managed Disk Candidates list then appears. You can optionally add MDisks to be striped across.
506
b. If you selected Sequential mode, you see the window shown in Figure 8-71. You must select the MDisk group; the Managed Disks list then appears. You need to choose at least one MDisk as a managed disk.
Figure 8-71 Creating a VDisk wizard: Select attributes for sequential mode VDisks
c. Enter the size of the VDisk you want to create and select the capacity measurement (MB or GB) from the list.
Note: An entry of 1 GB uses 1024 MB.
d. Click Next.
6. You can enter the VDisk name if you want to create just one VDisk, or the naming prefix if you want to create multiple VDisks. Click Next.
Tip: When you create more than one VDisk, the wizard does not ask you for a name for each VDisk to be created. Instead, the name you use here becomes a prefix and has a number, starting at zero, appended to it as each VDisk is created.
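The prefix-plus-number rule in the tip above generates names like these (a sketch; the helper name is ours):

```python
def vdisk_names(prefix: str, count: int) -> list:
    """Generate the VDisk names the wizard produces when creating multiple
    VDisks: the prefix with a number appended, starting at zero."""
    return [f"{prefix}{i}" for i in range(count)]
```

For example, creating three VDisks with the prefix appvol yields appvol0, appvol1, and appvol2.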
507
Note: If you do not provide a name, the SVC automatically generates the name VDiskX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length, but cannot start with a number or the word VDisk, because this prefix is reserved for SVC assignment only. 7. In the Verify VDisk window (see Figure 8-73 for striped and Figure 8-74 on page 509 for sequential), check if you are satisfied with the information shown, then click Finish to complete the task. Otherwise, click Back to return and make any corrections.
508
8. Figure 8-75 on page 509 shows the progress of the creation of your VDisks on storage and the final results.
509
In this section, we create a space-efficient VDisk (SEV disk) step-by-step. A space-efficient VDisk lets you present much higher capacity than is physically allocated (this is called thin provisioning). As the host using this VDisk consumes space up to the level of the real allocation, the SVC can dynamically grow the real capacity (when you enable auto expand) until it reaches the virtual capacity limit or the Managed Disk Group physically runs out of free space. In the latter case, the growing VDisk goes offline, affecting the host using that VDisk. Therefore, enabling threshold warnings is important and recommended. Perform the following steps to create a space-efficient VDisk with auto expand:
1. Select Create a VDisk from the list (Figure 8-66 on page 504) and click Go.
2. The Create Virtual Disks wizard launches. Click Next.
3. The Select groups window opens. Choose an I/O group and then a preferred node (see Figure 8-76 on page 510). In our case, we let the system choose. Click Next.
4. The Set attributes window opens (Figure 8-69 on page 506).
a. Choose what type of VDisk you want to create, striped or sequential.
b. Select the cache mode, Read/Write or None.
c. If you want, enter a unit device identifier.
d. Enter the number of VDisks you want to create.
e. Select the Space-efficient check box, which expands this section with the following options:
i. Type the size of the VDisk Capacity (remember, this is the virtual size).
ii. Type in a percentage or select a specific size for the usage threshold warning.
iii. Select the Auto expand check box. This allows the real disk size to grow as required.
iv. Select the Grain size (choose 32 KB normally, but match the FlashCopy grain size, which is 256 KB, if the VDisk is being used for FlashCopy).
f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its managed disk extents).
g. Click Next.
510
5. In the window, Select MDisk(s) and Size for a <modetype>-Mode VDisk, as shown in Figure 8-78 on page 511, and follow these steps: a. Select the Managed Disk Group from the list. b. Optionally, choose the Managed Disk Candidates upon which to create the VDisk. Click Add to move them to the Managed Disks Striped in this Order box. c. Type in the Real size you wish to allocate. This is how much disk space will actually be allocated. It can either be a percentage of the virtual size or a specific number.
6. In the Name the VDisk(s) window (Figure 8-79 on page 512), type a name for the VDisk you are creating. In our case, we used vdisk_sev2. Click Next.
511
7. In the Verify Attributes window (Figure 8-80), verify the selections. We can select the Back button at any time to make changes.
8. After selecting the Finish option, we are presented with a window (Figure 8-81 on page 513) that tells us the result of the action.
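The thin-provisioning behavior described at the start of this section can be sketched as a toy model. The class, its names, and the numbers are our own illustration; real SVC behavior also involves grain sizes and metadata overhead, which we ignore here:

```python
class SpaceEfficientVDisk:
    """Toy model of a space-efficient VDisk with auto-expand."""

    def __init__(self, virtual_mb, real_mb, warn_pct, auto_expand):
        self.virtual_mb = virtual_mb    # capacity presented to the host
        self.real_mb = real_mb          # physically allocated capacity
        self.warn_pct = warn_pct        # usage threshold warning, percent of virtual
        self.auto_expand = auto_expand
        self.used_mb = 0
        self.online = True

    def write(self, mb, mdg_free_mb):
        """Consume capacity; grow the real allocation if auto-expand is on
        and the MDG still has free space. Returns a warning string or None."""
        self.used_mb += mb
        if self.used_mb > self.real_mb:
            needed = self.used_mb - self.real_mb
            if self.auto_expand and mdg_free_mb >= needed and self.used_mb <= self.virtual_mb:
                self.real_mb = self.used_mb   # grow toward the virtual capacity limit
            else:
                self.online = False           # out of real capacity: the VDisk goes offline
        if self.used_mb >= self.virtual_mb * self.warn_pct / 100:
            return "usage threshold warning"
        return None
```

With auto-expand enabled and free space in the MDG, writes beyond the real allocation grow it transparently; with auto-expand disabled (or the MDG exhausted), the same write takes the VDisk offline, which is why the threshold warning matters.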
512
If the VDisk is currently assigned to a host, you receive a secondary message where you must click Forced Delete to confirm your decision. See Figure 8-83. This deletes the VDisk-to-host mapping before deleting the VDisk. Important: Deleting a VDisk is a destructive action for user data residing in that VDisk.
513
a. Select the new size of the VDisk. This is the increment to add. For example, if you have a 5 GB disk and you want it to become 10 GB, you specify 5 GB in this field.
b. Optionally, select the managed disk candidates from which to obtain the additional capacity. The default for a striped VDisk is to use equal capacity from each MDisk in the MDG.
Notes:
With sequential VDisks, you must specify the MDisk from which you want to obtain space.
There is no support for the expansion of image mode VDisks.
If there are not enough extents to expand your VDisk to the specified size, you receive an error message.
If you are using VDisk mirroring, all copies must be synchronized before expanding.
c. Optionally, you can format the extra space with zeros by selecting the Format Additional Managed Disk Extents check box. This does not format the entire VDisk, just the newly expanded space. When you are done, click OK.
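The default striped expansion takes equal capacity from each MDisk in the MDG. The sketch below rounds the increment up to whole extents and spreads them evenly; this even round-robin split is our own simplification, not the documented SVC algorithm:

```python
import math

def extra_extents_per_mdisk(increment_mb: int, extent_mb: int, n_mdisks: int) -> list:
    """Spread the extents needed for an expansion as evenly as possible
    across the MDisks in the group (illustrative simplification)."""
    total_extents = math.ceil(increment_mb / extent_mb)
    base, rem = divmod(total_extents, n_mdisks)
    # The first 'rem' MDisks take one extra extent each.
    return [base + (1 if i < rem else 0) for i in range(n_mdisks)]
```

For example, a 5 GB (5120 MB) increment with 256 MB extents across four MDisks takes five extents from each.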
515
1. From the SVC Welcome screen (Figure 8-1 on page 470), select the Work with Virtual Disks option and then the Virtual Disks link. In the Viewing Virtual Disks window (Figure 8-86), from the drop-down menu, select Map VDisks to a host from the list and click Go.
2. In the window Creating Virtual Disk-to-Host Mappings (Figure 8-87), select the target host. We have the option to specify the SCSI LUN ID. (This field is optional. Use this field to specify an ID for the SCSI LUN. If you do not specify an ID, the next available SCSI LUN ID on the host adapter is automatically used.) Click OK.
3. You are presented with an information window that displays the status, as shown in Figure 8-88.
516
4. You now return to the Viewing Virtual Disks window (Figure 8-86 on page 516). You have now completed all the tasks required to assign a VDisk to an attached host, and it is ready for use by the host.
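When no SCSI LUN ID is specified during the mapping above, the next available ID on the host adapter is used. A lowest-unused-ID sketch illustrates the idea; the exact allocation policy is the SVC's, and this helper is our own assumption:

```python
def next_available_scsi_id(used_ids) -> int:
    """Return the lowest non-negative SCSI LUN ID not already in use
    on the host adapter (illustrative model of 'next available')."""
    used = set(used_ids)
    i = 0
    while i in used:
        i += 1
    return i
```

So a host already using IDs 0, 1, and 2 would receive ID 3, while a host with a gap at ID 1 would receive 1.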
517
setting an I/O governing throttle based on MBs per second does not achieve much. It is better for you to use an I/O per second throttle. At the other extreme, a streaming video application generally issues a small amount of I/O, but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much. Therefore, you should use an MB per second throttle. Additionally, you can specify a unit device identifier. The Primary Copy is used to select which VDisk copy is going to be used as the preferred copy for read operations. Mirror Synchronization rate is the I/O governing rate in percentage during initial synchronization. A zero value disables synchronization. The Copy ID section is used for space-efficient VDisks. If you only have a single space-efficient VDisk, the Copy ID drop-down will be greyed out and you can change the warning thresholds and whether the copy will autoexpand. If you have a VDisk mirror and one, or more, of the copies are space-efficient, you can select a copy, or all copies, and change the warning thresholds/autoexpand individually.
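The arithmetic behind choosing a throttle type can be made concrete. The workload numbers below are hypothetical, chosen by us for illustration:

```python
def throughput_mb_per_s(iops: float, transfer_kb: float) -> float:
    """MB/s implied by an I/O rate and an average transfer size."""
    return iops * transfer_kb / 1024

# A database doing many small I/Os: high IOPS, modest bandwidth.
# An MB/s throttle barely constrains it; an IOPS throttle is the
# meaningful limit.
db = throughput_mb_per_s(iops=10000, transfer_kb=4)      # 39.0625 MB/s

# A streaming video application: few large I/Os, high bandwidth.
# An IOPS throttle barely constrains it; an MB/s throttle is the
# meaningful limit.
video = throughput_mb_per_s(iops=100, transfer_kb=1024)  # 100.0 MB/s
```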
518
a. Select the MDG to which you want to reassign the VDisk. You are only presented with a list of MDisk groups that have the same extent size.
b. Specify the number of threads to devote to this process (a value from 1 to 4). The optional threads parameter allows you to assign a priority to the migration process. A setting of 4 is the highest priority setting. If you want the process to take a lower priority than other types of I/O, you can specify 3, 2, or 1.
Important: After a migration is started, there is no way to stop it manually. Migration continues until it is complete, unless it is suspended by an error condition or the VDisk being migrated is deleted.
When you have finished making your selections, click OK to begin the migration process.
3. You need to manually refresh your browser, or close it and return to the Viewing Virtual Disks window periodically, to see the MDisk Group Name column in the Viewing Virtual Disks window update to reflect the new MDG name.
519
Figure 8-91 Migrate to image mode VDisk wizard: Select the Target MDisk
4. Select the MDG the MDisk will join (Figure 8-92). Click Next.
5. Select the priority of the migration by selecting the number of threads (Figure 8-93). Click Next.
Figure 8-93 Migrate to image mode VDisk wizard: Select the Threads
6. Verify that the information you specified is correct (Figure 8-94). If you are satisfied, click Finish. If you want to change something, use the Back option.
520
Figure 8-94 Migrate to image mode VDisk wizard: Verify Migration Attributes
7. Figure 8-95 on page 521 displays the details of the VDisk that you are migrating.
521
1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.
2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens. See Figure 8-96 on page 522. You can perform the following steps separately or in combination:
a. Choose what type of VDisk copy you want to create, striped or sequential.
b. Select the Managed Disk Group in which you want to put the copy. We recommend that you choose a different group to maintain higher availability.
c. Select the Select MDisk(s) manually button, which expands the section with a list of MDisks that are available for adding.
d. Choose the Mirror synchronization rate. This is the I/O governing rate, as a percentage, during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but this should be used only when the VDisk has never been used or is going to be formatted by the host.
e. You can make the copy space-efficient. This section expands, giving you options for the virtual size, warning thresholds, autoexpansion, and grain size. See 8.5.4, Creating a space-efficient VDisk with auto-expand on page 509 for more information.
f. Optionally, format the new VDisk by selecting the Format the new VDisk copy and mark the VDisk synchronized check box. Use this option with care, because if the primary copy goes offline, you might not have the data replicated on the other copy.
g. Click OK.
You can monitor the MDisk copy synchronization progress from the Manage Progress menu option and then the View Progress link.
522
2. In the window Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0), shown in Figure 8-98, follow these steps: a. Select the managed disk group from the list. b. Type the capacity of the VDisk. Select the unit of capacity from the list. c. Click Next.
523
Figure 8-98 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window.
3. In the window Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1), shown in Figure 8-99, select a managed disk group for Copy 1 of the mirror. This can be defined within the same or on a different MDG. Click Next.
Figure 8-99 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window
4. In the window Name the VDisk(s) (Figure 8-100 on page 524), type a name for the virtual disk you are creating. In this case, we used MirrorVDisk1. Click Next.
524
5. In the Verify Mirrored VDisk window (Figure 8-101), verify the selections. We can select the Back button at any time to make changes.
6. After selecting the Finish option, we are presented with the window shown in Figure 8-102, which tells us the result of the action.
We click Close again, and by clicking our newly created VDisk we can see more detailed information about that VDisk, as shown in Figure 8-103.
525
526
Important: You can create an image mode VDisk only by using an unmanaged disk; that is, you must do this before you add the MDisk that corresponds to your original logical volume to a Managed Disk Group. To create an image mode VDisk, perform the following steps:
1. From the My Work panel on the left side of your GUI, select Work with Virtual Disks.
2. From Work with Virtual Disks, select Virtual Disks.
3. From the drop-down menu, select Create Image Mode VDisk.
4. From the overview for creation of an image mode VDisk, select Next.
5. The Set attributes window appears (Figure 7 on page 528), where you enter the name of the VDisk you want to create.
6. You can also select whether you want to have read and write operations stored in cache by specifying a cache mode. Additionally, you can specify a unit device identifier. You can optionally choose to have it as a mirrored or Space-efficient VDisk. Click Next to continue. Attention: You must specify the cache mode when you create the VDisk. After the VDisk is created, you cannot change the cache mode. a. We describe the VDisk cache modes in Table 8-1.
Table 8-1 VDisk cache modes
Read/Write: All read and write I/O operations that are performed by the VDisk are stored in cache. This is the default cache mode for all VDisks.
None: All read and write I/O operations that are performed by the VDisk are not stored in cache.
527
Note: If you do not provide a name, the SVC automatically generates the name VDiskX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, a dash, and the underscore. It can be between one and 15 characters in length, but cannot start with a number, a dash, or the word VDisk, because this prefix is reserved for SVC assignment only. 7. Next you choose your MDisk to use for your Image Mode VDisk as shown in Figure 8-105.
Figure 8-105 Select your MDisk to use for your Image Mode VDisk
8. Next, select your preferred I/O group or have the system choose for you, as shown in Figure 8-106.
9. Figure 8-107 shows you the characteristics of the new image VDisk. Click Finish to complete this task.
528
You can now map the newly created VDisk to your host.
529
5. Figure 8-109 enables you to choose on which of the available MDisks your Copy 0 and Copy 1 will be stored. Notice that we have selected a second MDisk that is larger than the original. Click Next to proceed.
6. Now you can optionally select an I/O group and preferred node, and you can select an MDG for each of the MDisk copies, as shown in Figure 8-110. Click Next to proceed.
530
Figure 8-110 Choose an I/O group and an MDG for each of the MDisk copies
7. Figure 8-111 on page 531 shows you the characteristics of the new image mode VDisk. Click Finish to complete this task.
You can monitor the MDisk copy synchronization progress by selecting the Manage Progress option and then the View Progress link, as shown in Figure 8-112.
531
You have the option of assigning the VDisk to the host or waiting until it is synchronized and, after deleting the MDisk mirror Copy 1, map the MDisk copy to the host.
532
You can monitor the VDisk copy synchronization progress from the Manage Progress menu option and then the View Progress link, as shown in Figure 8-114 on page 534.
533
2. Figure 8-116 displays both copies of the VDisk mirror. Select the radio button of the original copy (Copy ID 0) and click OK.
534
The VDisk is now a single Space-efficient VDisk copy. To migrate an SE VDisk to a fully allocated VDisk, follow the same scenario, but add a normal (fully allocated) VDisk as the second copy.
535
This new VDisk is available to be mapped to a host. Note: Once you split a VDisk mirror, you cannot resynchronize or recombine them. You must create a VDisk copy from scratch.
536
1. Perform any necessary steps on your host to ensure that you are not using the space you are about to remove.
2. Select the radio button to the left of the VDisk you want to shrink (Figure 8-66 on page 504). Select Shrink a VDisk from the list and click Go.
3. The Shrinking Virtual Disks VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens, as shown in Figure 8-118. In the Reduce Capacity By field, enter the capacity you want to remove. Select B, KB, MB, GB, TB, or PB. The final capacity of the VDisk is the Current Capacity minus the capacity that you specify.
Note: Be careful with the capacity information. The Current Capacity field shows it in MBs, while you can specify a capacity to reduce in GBs. The SVC calculates 1 GB as being 1024 MB.
When you are done, click OK. The changes should become visible on your host.
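The unit mismatch in the note above (current capacity in MB, reduction possibly in GB, with 1 GB counted as 1024 MB) is easy to get wrong; a small sketch of the arithmetic (our own helper, for illustration):

```python
def shrunk_capacity_mb(current_mb: int, reduce_by: float, unit: str) -> int:
    """Final capacity = current capacity minus the reduction.

    The GUI shows the current capacity in MB, while the reduction may be
    entered in other units; the SVC counts 1 GB as 1024 MB.
    """
    factor = {"MB": 1, "GB": 1024, "TB": 1024 ** 2}[unit]
    return current_mb - int(reduce_by * factor)
```

For example, shrinking a 10240 MB VDisk by 5 GB leaves 5120 MB, not 5240 MB.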
537
For information about what you can do in this window, see 8.2.4, Managed disks on page 479.
538
Figure 8-123 shows you the total MDisk capacity, the space in the MDGs, the space allocated to the VDisks, and the total free space.
539
2. Now you can see to which host that VDisk belongs. If this is a long list, you can use the Additional Filtering and Sort options described in 8.7.1, Organizing on screen content on page 543.
2. You are now back at the window shown in Figure 8-124. You can now assign this VDisk to another host, as described in 8.5.8, Assigning a VDisk to a host on page 515. You have now completed the tasks required to manage VDisks within an SVC environment.
540
The unmanaged MDisks (SSD drives) are owned by the internal controllers. We recommend that you create a dedicated MDG for the SSD drives. When those MDisks are added to an MDG, they become managed and are treated like any other MDisks in an MDG. If we look closer at one of the selected controllers, as shown in Figure 8-127 on page 541, we can verify the SVC node that owns this controller, and that this is an internal SVC controller.
We can now check what MDisks (sourced from our SSD drives) are provisioned from that controller as shown in Figure 8-128.
541
From this view we can see all the relevant information, such as the status, the MDG, and the size. To see more detailed information about a single MDisk (a single SSD), click that MDisk; you then see the information shown in Figure 8-129 on page 542.
Notice the controller ID (6); this is an identifier for the internal controller type. When you have your SSDs in full operation and you want to see the VDisks that are using them, the easiest way is to locate the MDG that contains your SSD drives as MDisks and select Show VDisks Using This Group, as shown in Figure 8-130.
This displays the VDisks that are using your SSD drives.
542
General housekeeping
If at any time the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size. In addition, each time you open a configuration or administration window using the GUI in the following sections, it creates a link for that window along the top of your Web browser, beneath the main banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the icon to the right of the window name. Be careful not to close the entire browser.
543
544
1. From the SVC Welcome screen, select the Manage Cluster option and the Modify IP Addresses link. 2. The Modify IP Addresses window (Figure 8-133) opens.
Select the port you want to modify, select Modify Port Setting, and click Go. Notice that you can configure both ports on the SVC node, as shown in Figure 8-134.
545
3. You advance to the next window, which shows a message indicating that the IP addresses were updated. You have now completed the tasks required to change the IP addresses (cluster, service, gateway, and master console) for your SVC environment.
3. Although it does not state the current status, clicking OK turns on the statistics collection. To verify, click the Cluster Properties link, as you did in 8.8.1, Viewing cluster properties on page 544. Then click the Statistics link. You see the interval as specified in Step 2 and the status of On, as shown in Figure 8-137.
546
You have now completed the tasks required to start statistics collection on your cluster.
3. The window closes. To verify that the collection has stopped, click the Cluster Properties link, as you did in 8.8.1, Viewing cluster properties on page 544. Then click the Statistics link. Now you see the status has changed to Off, as shown in Figure 8-139 on page 548.
547
You have now completed the tasks required to stop statistics collection on your cluster.
8.8.6 iSCSI
From the View Cluster Properties screen, we can select the iSCSI overview, which shows whether you have configured an iSNS server, CHAP, and which authentication methods are supported. This is shown in Figure 8-141.
548
549
You have now completed the tasks necessary to configure an NTP server and set the cluster time zone and time.
550
Note: When a node shuts down due to loss of power, it dumps the cache to an internal hard drive so the cache data can be retrieved when the cluster starts. With the 8F2/8G4 nodes, the cache is 8 GB, and as such, it can take several minutes to dump to the internal drive. SVC UPSs are designed to survive at least two power failures in a short time; after that, nodes refuse to start until the batteries have sufficient power (to survive another immediate power failure). If, during your maintenance activities, the UPS detected a loss of power more than once (and thus the nodes started and shut down more than once in a short time frame), you might find that you have unknowingly drained the UPS batteries and have to wait until they are sufficiently charged before the nodes will start.
Perform the following steps to shut down your cluster:
Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all VDisks being provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to do this if you are only shutting down one SVC node.
Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the VDisks provided by the cluster. If you are unsure which hosts are using the VDisks provided by the cluster, follow the procedure in 8.5.22, Showing the host to which the VDisk is mapped on page 538, and repeat it for all VDisks.
1. From the SVC Welcome screen, select the Manage Cluster option and the Shut Down Cluster link.
2. The Shutting Down Cluster window (Figure 8-143) opens. You get a message asking you to confirm that you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process.
Note: At this point, you will lose administrative contact with your cluster.
You have now completed the tasks required to shut down the cluster. You can now shut down the uninterruptible power supplies by pressing the power buttons on their front panels.
Tip: When you shut down the cluster, it does not start again automatically; you must restart it manually. However, if the cluster shuts down because the UPS has detected a loss of power, it restarts automatically when the UPS detects that power has been restored (and the batteries have sufficient charge to survive another immediate power failure).
Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press and quickly release the power-on button. After that node is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have re-established administrative contact using the GUI, your cluster is fully operational again.
Role: Copy Operator
For those who control all copy functionality of the cluster.
Commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership

Role: Service
For those who perform service maintenance and other hardware tasks on the cluster.
Commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime

Role: Monitor
Commands: All svcinfo commands; the svctask commands finderr, dumperrlog, dumpinternallog, and chcurrentuser; and the svcconfig command backup
The superuser user is a built-in account that has the Security Admin role permissions. You cannot change its permissions or delete this account; you can only change the password, which can also be changed manually on the front panel of the cluster nodes.
Toward the top of the window, we can see the name of the user being modified, and we enter our new password, as shown in Figure 8-145.
2. Select User from the My Work window, and select Create a User from the drop-down menu, as shown in Figure 8-147.
3. Enter a name for your user and the desired password. Because we are not connected to an LDAP server, we select local authentication, and we can therefore choose which user group our user belongs to. In our scenario, we are creating a user for SAN administrative purposes, so it is appropriate to add this user to the Administrator group. We also attach an SSH key so that a CLI session can be opened. We view the attributes as shown in Figure 8-148 on page 555.
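The same user can be created from the CLI; a sketch, assuming the svctask mkuser syntax of this release (the user name, password, and key file path are examples, and the svc wrapper only prints the command):

```shell
#!/bin/sh
# Sketch: create a local user in the Administrator group with a password
# (for GUI login) and an SSH public key (for CLI login). For real use,
# replace the echo with an SSH call to the cluster.
svc() { echo "$@"; }

svc svctask mkuser -name sanadmin -usergrp Administrator \
    -password passw0rd -keyfile /tmp/sanadmin.pub
```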
2. You have the option of changing the password, assigning a new role, or changing the SSH key for the given user name. Click OK (Figure 8-151).
2. Click OK to confirm that you want to delete the user, as shown in Figure 8-153.
Here, we have several options for our user group, together with detailed information about the available groups. Figure 8-155 shows these options; they are the same options that are presented when we select Modify User Group.
We have now completed the tasks required to create, modify, and delete a user and user groups within the SVC cluster.
3. On the Renaming I/O Group window (I/O Group Name is the I/O group you selected in the previous step), type the New Name you want to assign to the I/O group. Click OK, as shown in Figure 8-159. Our new name is IO_grp_SVC02.
Note: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length, but cannot start with a number, the dash, or the word iogrp, because this prefix is reserved for SVC assignment only. SVC also uses io_grp as a reserved word prefix. An I/O group name cannot therefore be changed to io_grpN, where N is a numeric character; however, io_grpNy or io_grpyN, where y is any non-numeric character used in conjunction with N, is acceptable.

We have now completed the tasks required to rename an I/O group.
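The rename can equally be scripted with the CLI chiogrp command; a sketch using the names from this example (the svc wrapper only prints the command it would send):

```shell
#!/bin/sh
# Sketch: rename an I/O group from the CLI. Replace the echo with an
# SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask chiogrp -name IO_grp_SVC02 io_grp0
```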
2. The Viewing Clusters window appears, as shown in Figure 8-161. This window contains several useful links and pieces of information: My Work (top left), the GUI version and build level information (right, under the main graphic), and a hypertext link to the SVC download page: http://www.ibm.com/storage/support/2145

3. On the Viewing Clusters window (Figure 8-161), select the radio button next to the cluster on which you want to perform actions (in our case, ITSO_CLS3) and click Go.
4. The SAN Volume Controller Console Application launches in a separate browser window (Figure 8-162). In this window, as with the Welcome screen, you can see several links under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version and build level information (right, under main graphic), and a hypertext link that will take you to the SVC download page: http://www.ibm.com/storage/support/2145 Under My Work, click the Work with Nodes option and then the Nodes link.
5. The Viewing Nodes window (Figure 8-163) opens. Note the input/output (I/O) group name (for example, io_grp0). Select the node you want to add. Ensure that Add a node is selected from the drop-down list and click Go.
Note: You can rename the existing node to your own naming convention standards (we show how to do this later). In your window, it should appear as node1 by default. 6. The next window (Figure 8-164) displays the available nodes. Select the node from the Available Candidate Nodes drop-down list. Associate it with an I/O group and provide a name (for example, SVCNode2). Click OK.
Note: If you do not provide a name, the SVC automatically generates the name nodeX, where X is the ID sequence number assigned internally by the SVC. If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but cannot start with a number or the word node, because this prefix is reserved for SVC assignment only.

In our case, we only have enough nodes to complete the formation of one I/O group. Therefore, we added our new node to the I/O group that node1 was already using, namely io_grp0 (you can rename it from the default using your own naming convention standards). If this window does not display any available nodes (indicated by the message CMMVC1100I There are no candidate nodes available), check that your second node is powered on and that the zones are appropriately configured in your switches. It is also possible that a pre-existing cluster's configuration data is stored on the node. If you are sure this node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. When this is complete, return to this window and you should see the node listed.

7. Return to the Viewing Nodes window (Figure 8-165). It shows the status of the node changing from Adding to Online.
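Adding the node can also be done from the CLI (svctask addnode is listed in the role table earlier); a sketch, assuming these flag names, where the WWNN shown is a made-up placeholder:

```shell
#!/bin/sh
# Sketch: add a candidate node to an I/O group from the CLI.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

# List candidate nodes first (their WWNNs appear in the output):
svc svcinfo lsnodecandidate

# Then add one by its WWNN; the WWNN below is a placeholder.
svc svctask addnode -wwnodename 50050768010027E2 \
    -iogrp io_grp0 -name SVCNode2
```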
Note: This window does not automatically refresh. Therefore, you continue to see the Adding status only until you click the Refresh button.
We have now completed the cluster configuration, and you have a fully redundant SVC environment.
We start by selecting Work with Nodes from the Welcome window and then select Node Ethernet Ports, as shown in Figure 8-167 on page 564.

We can see that we have four connections (two per node) to use; they are all physically connected with a 100 Mb link, but they are not configured yet. From the drop-down menu, we select Configure a Node Ethernet Port and enter the IP address that we intend to use for iSCSI, as shown in Figure 8-168.

We can now see that one of our Ethernet ports is configured and online, as shown in Figure 8-169. We do the same for the three remaining IP addresses.
We do the same for the remaining ports, using a unique IP address for each port. When we have done that, all our Ethernet ports are configured, as shown in Figure 8-170 on page 565.

Now both physical ports on each node are configured for iSCSI. We can see the iSCSI identifier (iSCSI name) for each SVC node by selecting Work with Nodes from the Welcome window and then selecting Nodes; under the iSCSI Name column, we see the iSCSI identifier, as shown in Figure 8-171. Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node should be visible in the iSCSI configuration tool on the host.

We can also enter an iSCSI alias for the iSCSI name on the node itself, as shown in Figure 8-172.
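The port configuration can also be done with the cfgportip command; a sketch, assuming that syntax (the IP addresses are examples, and the trailing argument is the port ID):

```shell
#!/bin/sh
# Sketch: assign iSCSI IP addresses to both Ethernet ports of node 1.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask cfgportip -node 1 -ip 192.168.1.121 \
    -mask 255.255.255.0 -gw 192.168.1.1 1
svc svctask cfgportip -node 1 -ip 192.168.1.122 \
    -mask 255.255.255.0 -gw 192.168.1.1 2
```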
We change the name to something easier to recognize, as shown in Figure 8-173 on page 566.
Then, from the drop-down menu, select Create a Consistency Group and click Go, as shown in Figure 8-175.
Note: If we choose to use the Automatically Delete Consistency Group When Empty feature, we can only use this consistency group for mappings that are marked for auto deletion. A non-autodelete consistency group is allowed to contain both autodelete and non-autodelete FlashCopy mappings.

Click Close when the new name has been entered; the result should appear as shown in Figure 8-177.
Repeat the previous steps to create another FlashCopy consistency group (Figure 8-178 on page 568). The FlashCopy consistency groups are now ready to use.
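The same consistency groups can be created with the CLI; a sketch using group names from this chapter (the -autodelete flag is our assumption for the auto-delete option, and the svc wrapper only prints each command):

```shell
#!/bin/sh
# Sketch: create two FlashCopy consistency groups from the CLI.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask mkfcconsistgrp -name FC_SIGNA
svc svctask mkfcconsistgrp -name FC_DONA -autodelete
```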
We are then presented with the FlashCopy creation wizard overview of the creation process for a FlashCopy mapping, and we click Next to proceed. We name the first FlashCopy mapping PROD_1, select the previously created consistency group FC_SIGNA, set the background copy priority to 50 and the Grain Size to 64, and click Next to proceed, as shown in Figure 8-180.
The next step is to select the source VDisk. If there are many source VDisks that are not already defined in a FlashCopy mapping, we can filter the list here. In Figure 8-181, we define the filter * (which shows all our VDisks) for the source VDisk and click Next to proceed.
We select Galtarey_01 from the available VDisks as our source disk and click Next to proceed (Figure 8-182). The next step is to select our target VDisk. The FlashCopy mapping wizard only presents VDisks that are the same size as the source VDisk and that are not already in a FlashCopy mapping or defined in a Metro Mirror relationship. In Figure 8-182, we select the target Hrappsey_01 and click Next to proceed.
In the next step, we select an I/O group for this mapping. Finally, we verify our FlashCopy mapping (Figure 8-183 on page 571) and click Finish to create it.
We repeat the procedure to create another FlashCopy mapping, this time to the second FlashCopy target VDisk of Galtarey_01. We give it a different FlashCopy mapping name and choose a different FlashCopy consistency group, as shown in Figure 8-185 on page 572. As you can see in this example, we changed the background copy rate to 30, which slows down the background copy process. The clearing rate of 60 extends the stopping process if we have to stop the mapping during a copy process. An incremental mapping copies only the parts of the source or target VDisk that have changed since the last FlashCopy process.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all data from the source to the target VDisk.
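An equivalent mapping can be created with the mkfcmap command; a sketch using the names and settings from the first mapping in this example (the flag names are our assumption of this release's CLI syntax):

```shell
#!/bin/sh
# Sketch: create an incremental FlashCopy mapping in a consistency
# group, with a background copy rate of 50 and a grain size of 64 KB.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask mkfcmap -name PROD_1 -source Galtarey_01 \
    -target Hrappsey_01 -consistgrp FC_SIGNA \
    -copyrate 50 -grainsize 64 -incremental
```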
Note: If no consistency group is defined, the mapping will be a standalone mapping and can be prepared and started without impacting other mappings. All mappings in the same consistency group need to have the same status to maintain the consistency of the group. In Figure 8-186, you can see that Galtarey_01 is still available.
We select Hrappsey_02 as the destination VDisk, as shown in Figure 8-187 on page 573.
On the final page of the wizard, we verify all the parameters and confirm the settings by clicking Finish, as shown in Figure 8-188. The background copy rate specifies the priority given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default value is 50.

Tip: FlashCopy can be invoked from the SVC GUI, but this might not make much sense if you plan to handle a large number of FlashCopy mappings or consistency groups periodically, or at varying times. In this case, creating a script by using the CLI may be more convenient.
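As a sketch of such a script, using the group name from this chapter (the svc wrapper only prints each command; in a real environment it would submit them over SSH and poll svcinfo lsfcconsistgrp between the two steps):

```shell
#!/bin/sh
# Sketch: prepare and then start a FlashCopy consistency group from
# the CLI instead of the GUI. Replace the echo with an SSH call to
# the cluster for real use.
svc() { echo "$@"; }

GROUP=FC_SIGNA

# Quiesce host I/O (application-specific) before this point.
svc svctask prestartfcconsistgrp "$GROUP"
# In a real script, poll until the group status is Prepared, then:
svc svctask startfcconsistgrp "$GROUP"
```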
If you select only one mapping to be prepared, the cluster asks whether you want all the volumes in that consistency group to be prepared, as shown in Figure 8-189.
When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group to prepare all the mappings at once. We select the FlashCopy consistency group, select Prepare a Consistency Group from the action list, and click Go. The status goes to Preparing and then finally to Prepared; click the Refresh button several times until the group reaches the Prepared state. Figure 8-190 shows the result: the status of the consistency group has changed to Prepared.
Because we have already prepared the FlashCopy mapping, we are ready to start the mapping right away. Notice that this mapping is not a member of any consistency group. An overview message with information about the mapping we are about to start is shown in Figure 8-192, and we select Start to start the FlashCopy mapping.
After we have selected Start, we are automatically shown the copy process view, which shows the progress of our copy mappings.
In Figure 8-194, we are prompted to confirm starting the FlashCopy consistency group. We now flush the database and OS buffers and quiesce the database, then click OK to start the FlashCopy consistency group. Note: Since we have already prepared the FlashCopy consistency group, this option is grayed out when you are prompted to confirm starting the FlashCopy consistency group.
As shown in Figure 8-195 on page 576, we verified that the consistency group is in the Copying state, and subsequently, we resume the database I/O.
When the background copy is completed for all FlashCopy mappings in the consistency group, the status is changed to Idle or Copied.
Note: Stopping a FlashCopy mapping should only be done when the data on the target VDisk is of no use, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC. As shown in Figure 8-197, we stop the FC_DONA consistency group. All mappings belonging to that consistency group are now in the Copying state.
We select the FlashCopy consistency group FC_DONA and, from the drop-down menu, select Stop Mapping, as shown in Figure 8-198.

When selecting which method to use to stop the mapping, we have the three options shown in Figure 8-199.
Because we want to stop the mapping right away, we select Forced Stop, and the status of the FlashCopy consistency group goes from Copying to Stopping to Stopped, as shown in Figure 8-200.
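The forced stop has a CLI equivalent; a sketch (the -force flag is our assumption for the Forced Stop option, and the svc wrapper only prints the command):

```shell
#!/bin/sh
# Sketch: force-stop a FlashCopy consistency group from the CLI.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask stopfcconsistgrp -force FC_DONA
```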
If the auto-delete option was not selected initially, you can delete the mapping manually, as shown in Figure 8-202 on page 579.
We can do this even if the consistency group has a status of Copying, as shown in Figure 8-204.

Because there is an active mapping in the Copying state, we get a warning message, as shown in Figure 8-205 on page 580.
Splitting a cascaded FlashCopy mapping allows the source of a map that is 100% complete to be removed from the head of the cascade when the map is stopped.
For example, if we have four VDisks in a cascade (A -> B -> C -> D), and the map A -> B is 100% complete, as shown in Figure 8-207 on page 581, then clicking Split Stop, as shown in Figure 8-208, results in FCMAP_AB becoming idle_copied, and the remaining cascade becomes B -> C -> D.

Without the split option, VDisk A would remain at the head of the cascade (A -> C -> D). Consider this sequence of steps:
1. The user takes a backup using the mapping A -> B. A is the production VDisk; B is a backup.
2. At some later point, the user suffers corruption on A, and so reverses the mapping (B -> A).
3. The user then takes another backup from the production disk A, and so has the cascade B -> A -> C.

Stopping A -> B without using the Split Stop option results in the cascade B -> C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, they can still start the mapping A -> B (using the -restore flag), but they cannot then reverse the mapping to A (B -> A or C -> A). Stopping A -> B with Split Stop would instead have resulted in the cascade A -> C, which does not cause the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
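The split stop also has a CLI form; a sketch, assuming a -split flag on the stopfcmap command (the mapping name is taken from the example above):

```shell
#!/bin/sh
# Sketch: stop the 100%-complete mapping at the head of a cascade and
# split its source out of the cascade. Replace the echo with an SSH
# call to the cluster for real use.
svc() { echo "$@"; }

svc svctask stopfcmap -split FCMAP_AB
```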
Important: All SVC clusters must be at level 5.1 or later. In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS2 at the secondary site. Details of the VDisks are shown in Table 8-3.
Table 8-3 VDisk details

Content of VDisk      VDisk at primary site   VDisk at secondary site
Database files        MM_DB_Pri               MM_DB_Sec
Database log files    MM_DBLog_Pri            MM_DBLog_Sec
Application files     MM_App_Pri              MM_App_Sec
Because data consistency is needed across the VDisks MM_DB_Pri and MM_DBLog_Pri, a consistency group, CG_W2K3_MM, is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the VDisk MM_App_Pri. The Metro Mirror setup is illustrated in Figure 8-213 on page 584.
Figure 8-213 Metro Mirror scenario: the consistency group CG_W2K3_MM contains MM Relationship 1 (MM_DB_Pri -> MM_DB_Sec) and MM Relationship 2 (MM_DBlog_Pri -> MM_DBlog_Sec); MM Relationship 3 (MM_App_Pri -> MM_App_Sec) is stand-alone
After we select Go to create a cluster partnership, as shown in Figure 8-214, the SVC cluster shows the options available for selecting a partner cluster, as shown in Figure 8-215 on page 586. We have multiple clusters to choose from in the cluster candidates list; in our scenario, we use the one called ITSO-CLS2. Select ITSO-CLS2, specify the available bandwidth for the background copy (in this case, 50 MBps), and then click OK. Two options are available during creation:

Inter-Cluster Delay Simulation, which simulates the Global Mirror round-trip delay between the two clusters, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.

Intra-Cluster Delay Simulation, which simulates the Global Mirror round-trip delay within the cluster, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.
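The partnership can also be created from the CLI with the mkpartnership command; a sketch using the values from this scenario (the svc wrapper only prints the command it would send):

```shell
#!/bin/sh
# Sketch: create one side of the intercluster partnership, with
# 50 MBps of bandwidth for background copy. The same command, naming
# ITSO-CLS1, must then be run on the partner cluster. Replace the
# echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask mkpartnership -bandwidth 50 ITSO-CLS2
```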
As shown in Figure 8-216, our newly created Metro Mirror cluster partnership is shown as Partially Configured, because we have only performed the work on one side of the partnership. To fully configure the Metro Mirror cluster partnership, we must carry out the same steps on ITSO-CLS2 as we did on ITSO-CLS1. For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Metro Mirror cluster partnership, specify the available bandwidth for the background copy (again, 50 MBps), and then click OK, as shown in Figure 8-217 on page 587.
Figure 8-217 We select the cluster partner for the secondary partner
Now that both sides of the SVC Cluster Partnership are defined, the resulting window shown in Figure 8-218 on page 587 confirms that our Metro Mirror cluster partnership is Fully Configured.
The GUI for ITSO-CLS2 is no longer needed. Close this and use the GUI for cluster ITSO-CLS1 for all further steps.
We are presented with the wizard that helps create the Metro Mirror consistency group. The first step in the wizard gives an introduction to the steps involved in the creation of a Metro Mirror consistency group, as shown in Figure 8-220. Click Next to proceed.
As shown in Figure 8-221, we specify the name for the consistency group and select our remote cluster, which is already defined in 8.14.3, Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 on page 585. If we were using this consistency group for internal mirroring (that is, within the same cluster), we would select an intra-cluster consistency group. In our scenario, we select intercluster with our remote cluster ITSO_CLS2 and click Next.
In Figure 8-222 on page 589, we can see the Metro Mirror relationships already created that could be included in our Metro Mirror consistency group. Because we do not have any existing relationships at this point to include, we create an empty group by clicking Next to proceed.
Verify the setting for the consistency group and click Finish to create the Metro Mirror consistency group, as shown in Figure 8-223 on page 589.
After creation of the consistency group, the GUI returns to the Viewing Metro & Global Mirror Consistency Groups window, as shown in Figure 8-224. This window lists the newly created consistency group; notice that it is empty, because no relationships have been added to the group yet.
We are presented with the wizard that helps us create the Metro Mirror relationship. The first step in the wizard gives an introduction to the steps involved in the creation of the Metro Mirror relationship, as shown in Figure 8-226. Click Next to proceed.
As shown in Figure 8-227 on page 591, we name the first Metro Mirror relationship MMREL1 and select the type of cluster relationship (in this case, intercluster), as per the scenario shown in Figure 8-213 on page 584. The wizard also gives us the option to select the type of copy service, which in our case is Metro Mirror.
Figure 8-227 Naming the Metro Mirror relationship and selecting the type of cluster relationship
The next step enables us to select a master VDisk. Because this list could potentially be large, the Filtering Master VDisk Candidates window appears, which enables us to reduce the list of eligible VDisks based on a defined filter. In Figure 8-228, we use the filter * (to list all VDisks) and click Next.

Tip: To avoid listing all the VDisks, we could instead use MM* as a filter.
As shown in Figure 8-229, we select MM_DB_Pri to be a master VDisk for this relationship, and click Next to proceed.
The next step requires us to select an auxiliary VDisk. The Metro Mirror relationship wizard will automatically filter this list, so that only eligible VDisks are shown. Eligible VDisks are those that have the same size as the master VDisk and are not already part of a Metro Mirror relationship. As shown in Figure 8-230, we select MM_DB_Sec as the auxiliary VDisk for this relationship, and click Next to proceed.
As shown in Figure 8-231 on page 592, we select the consistency group that we created and now our relationship will be immediately added to that group. Click Next to proceed.
Finally, in Figure 8-232, we verify the attributes for our Metro Mirror relationship and click Finish to create it.
After successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, as shown in Figure 8-233. This window lists the newly created relationship. Notice that we have not yet started the copy process; we have only established the connection between the two VDisks.
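The consistency group and relationship can be created in two CLI commands; a sketch using the names from this scenario (the flag names are our assumption of this release's syntax, and the svc wrapper only prints each command):

```shell
#!/bin/sh
# Sketch: create the Metro Mirror consistency group and the first
# relationship, placing the relationship into the group at creation.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask mkrcconsistgrp -name CG_W2K3_MM -cluster ITSO-CLS2
svc svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec \
    -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL1
```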
By following a similar process, we create the second Metro Mirror relationship MMREL2, which is shown in Figure 8-234.
Figure 8-235 Specifying the Metro Mirror relationship name and auxiliary cluster
As shown in Figure 8-236 on page 594, we are prompted for a filter prior to presenting the master VDisk candidates. We select the MM* filter and click Next.
As shown in Figure 8-237 on page 595, we select MM_App_Pri to be the master VDisk of the relationship, and click Next to proceed.
As shown in Figure 8-238, we select MM_APP_Sec as the auxiliary VDisk of the relationship, and click Next to proceed.
As shown in Figure 8-239, we will not select a consistency group, since we are creating a stand-alone Metro Mirror relationship.
Note: To add a Metro Mirror relationship to a consistency group, it must be in the same state as the consistency group.
As shown in Figure 8-240, we cannot select a consistency group, because we defined our relationship as synchronized, and it is not in the same state as the consistency group we created earlier.
Figure 8-240 The consistency group must have the same state as the relationship
Finally, Figure 8-241 shows the actions that will be performed. We click Finish to create this new relationship.
After successful creation, we are returned to the Metro Mirror relationship window. Figure 8-242 now shows all our defined Metro Mirror relationships.
In Figure 8-244, we do not need to change the Forced start, Mark as clean, or Copy direction parameters, because this is the first time we are invoking this Metro Mirror relationship (and we defined the relationship as being already synchronized). We click OK to start the stand-alone Metro Mirror relationship MMREL3.
Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state, as shown in Figure 8-246 on page 598.
As shown in Figure 8-247, we click OK to start the copy process. We cannot select the Forced start, Mark as clean, or Copy Direction options, as our consistency group is currently in the Inconsistent stopped state.
As shown in Figure 8-248, we are returned to the Metro Mirror consistency group list and the consistency group CG_W2K3_MM has changed to the Inconsistent copying state.
Since the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all the relationships in the consistency group. Upon completion of the background copy for all the relationships in the consistency group, it enters the Consistent synchronized state.
Figure 8-249 Viewing background copy progress for Metro Mirror relationships
Note: Setting up SNMP traps for the SVC enables automatic notification when the Metro Mirror consistency group or relationships change state.
As shown in Figure 8-251, we check the Enable write access... option and click OK to stop the Metro Mirror relationship.
Figure 8-251 Enable access to the secondary VDisk while stopping relationship
As shown in Figure 8-252, the Metro Mirror relationship transitions to the Idling state when it is stopped with access to the secondary VDisk enabled.
As shown in Figure 8-254, we click OK without specifying Enable write access... to the secondary VDisks.
Figure 8-254 Stopping consistency group without enabling access to secondary VDisks
As shown in Figure 8-255, the consistency group enters the Consistent stopped state when stopped without enabling access to the secondary.
Afterwards, if we want to enable write access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process and this time specify that we want to enable write access to the secondary VDisks. In Figure 8-256, we select the Metro Mirror relationship, select Stop Copy Process from the drop-down menu, and click Go.
As shown in Figure 8-257, we check the Enable write access... check box and click OK.
When applying the Enable write access... option, the consistency group transitions to the Idling state, as shown in Figure 8-258.
Figure 8-258 Viewing Metro Mirror consistency group in the Idling state
Figure 8-259 Starting a stand-alone Metro Mirror relationship in the Idling state
As shown in Figure 8-260, we check the Force option, since write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary, and click OK.
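The same start, with the force option and an explicit copy direction, maps to the CLI as follows; a sketch (the flag names are our assumption, and the svc wrapper only prints the command):

```shell
#!/bin/sh
# Sketch: start an Idling stand-alone relationship, forcing the start
# because write I/O occurred while Idling, with the master as primary.
# Replace the echo with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask startrcrelationship -primary master -force MMREL3
```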
The Metro Mirror relationship enters the Consistent copying state, and when the background copy is complete, the relationship transitions to the Consistent synchronized state, as shown in Figure 8-261.
If write I/O has been performed on either VDisk while the consistency group was in the Idling state, consistency is compromised. In this situation, we must check the Force option to start the copy process; otherwise, the command fails. As shown in Figure 8-262, we select the Metro Mirror consistency group, select Start Copy Process from the drop-down menu, and click Go.
Figure 8-262 Starting the copy process for the consistency group
As shown in Figure 8-263, we check the Force option and set the copy direction by selecting the primary as the master.
Figure 8-263 Specifying the options while starting the copy process in the consistency group
When the background copy completes, the Metro Mirror consistency group enters the Consistent synchronized state shown in Figure 8-264.
Figure 8-265 Selecting the consistency group for which the copy direction is to be changed
In Figure 8-266, we see that the current primary VDisks are the masters. So, to change the copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.
Figure 8-266 Selecting primary VDisk, as auxiliary, to switch the copy direction
The copy direction is now switched and we are returned to the Metro Mirror consistency group list, where we see that the copy direction has been switched, as shown in Figure 8-267.
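The direction switch has a direct CLI equivalent (svctask switchrcconsistgrp is listed in the role table earlier); a sketch making the auxiliary VDisks the primaries, as in the GUI example:

```shell
#!/bin/sh
# Sketch: switch the copy direction of a Metro Mirror consistency
# group by making the auxiliary side the primary. Replace the echo
# with an SSH call to the cluster for real use.
svc() { echo "$@"; }

svc svctask switchrcconsistgrp -primary aux CG_W2K3_MM
```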
Figure 8-267 Viewing Metro Mirror consistency group after changing the copy direction
In Figure 8-268, we show the new copy direction for individual relationships within that consistency group.
Figure 8-268 Viewing Metro Mirror relationship after changing the copy direction
In Figure 8-270, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Metro Mirror relationship, we specify the auxiliary VDisk to be the primary, and click OK.
The copy direction is now switched and we are returned to the Metro Mirror relationship list, where we see that the copy direction has been switched and the auxiliary VDisk has become the primary, as shown in Figure 8-271.
Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. The Global Mirror setup is illustrated in Figure 8-272 on page 608.
Figure 8-272 Global Mirror scenario: on the primary site (SVC cluster ITSO-CLS1), the consistency group CG_W2K3_MM contains GM_Relationship 1 (GM_DB_Pri -> GM_DB_Sec) and GM_Relationship 2 (GM_DBLog_Pri -> GM_DBLog_Sec); GM_Relationship 3 (GM_App_Pri -> GM_App_Sec) is stand-alone, targeting the secondary site (SVC cluster ITSO-CLS2)
Figure 8-274 shows the cluster partnerships defined for this cluster. If no partnership exists, nothing is listed, and a warning states that, for any type of copy relationship between VDisks across two different clusters, a partnership must exist between them. Notice that, in our case, we already have another partnership running. Select Go to continue creating your partnership.
In Figure 8-275, the available SVC cluster candidates are listed; in our case, this is ITSO-CLS4. We select ITSO-CLS4, specify the available bandwidth for the background copy (in this case, 10 MBps), and then click OK.
Figure 8-275 Selecting SVC cluster partner and specifying bandwidth for background copy
In the resulting window, shown in Figure 8-276, the newly created Global Mirror cluster partnership is shown as Partially Configured.
To fully configure the Global Mirror cluster partnership, we must carry out the same steps on ITSO-CLS4 as we did on ITSO-CLS1. For simplicity, only the last two windows are shown in the following figures. Launching the SVC GUI for ITSO-CLS4, we select ITSO-CLS1 for the Global Mirror cluster partnership, specify the available bandwidth for the background copy (again 10 MBps), and click OK, as shown in Figure 8-277.
Figure 8-277 Selecting SVC cluster partner and specifying bandwidth for background copy
Now that both sides of the SVC Cluster Partnership are defined, the window shown in Figure 8-278 confirms that our Global Mirror cluster partnership is Fully Configured.
Note: Link tolerance, intercluster delay simulation, and intracluster delay simulation are introduced with the use of the Global Mirror feature.
primary VDisk to a secondary VDisk, is delayed. A value from 0 to 100 milliseconds in 1 millisecond increments can be set and a value of zero disables this feature. To check the current settings for the delay simulation, refer to Changing link tolerance and delay simulation values for Global Mirror on page 613.
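The rule above (a whole number of milliseconds from 0 to 100, in 1 ms increments, where 0 disables the feature) can be captured in a small validation sketch. This helper is our own illustration, not part of the SVC CLI:

```python
def validate_gm_delay_simulation(delay_ms):
    """Validate a Global Mirror delay simulation setting.

    Per the text above: a whole number of milliseconds from 0 to 100,
    in 1 ms increments, where 0 disables the delay simulation.
    Returns True when the feature would be enabled, False when disabled.
    """
    if not isinstance(delay_ms, int) or not 0 <= delay_ms <= 100:
        raise ValueError("delay must be a whole number of ms from 0 to 100")
    return delay_ms != 0
```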
Changing link tolerance and delay simulation values for Global Mirror
Here, we modify the Global Mirror link tolerance and delay simulation values and show the resulting parameter settings. Launching the SVC GUI for ITSO-CLS1, we select the Global Mirror Cluster Partnership option to view and modify the parameters, as shown in Figure 8-279 and Figure 8-280 on page 613, respectively.
Figure 8-279 View and modify Global Mirror link tolerance and delay simulation parameters
Figure 8-280 Set Global Mirror link tolerance and delay simulations parameters
After performing the steps, the GUI returns to the Global Mirror Partnership window and lists the new parameter settings, as shown in Figure 8-281.
To start the creation process, we select Create Consistency Group from the drop-down menu and click Go, as shown in Figure 8-283. In our list, we already have one Metro Mirror consistency group created between ITSO-CLS1 and ITSO-CLS2, but we are now creating a new Global Mirror consistency group.
We are presented with a wizard that helps us create the Global Mirror consistency group. The first step in this wizard gives an introduction to the steps involved in the creation of the Global Mirror consistency group, as shown in Figure 8-284. Click Next to proceed.
As shown in Figure 8-285, we specify the consistency group name and whether it is to be used for intercluster or intracluster relationships. In our scenario, we select Create an inter-cluster consistency group, and then we need to select our cluster partner. In Figure 8-285, we can see that we can select between ITSO-CLS2 and ITSO-CLS4; because ITSO-CLS4 is our Global Mirror partner, we select it and click Next.
Figure 8-286 would show any existing Global Mirror relationships that could be included in the Global Mirror consistency group. Because we do not have any existing Global Mirror relationships at this time, we create an empty group by clicking Next to proceed, as shown in Figure 8-286.
Verify the settings for the consistency group and click Finish to create the Global Mirror Consistency Group, as shown in Figure 8-287.
Figure 8-287 Verifying the settings for the Global Mirror consistency group
When the Global Mirror consistency group is created, we are returned to the Viewing Global Mirror Consistency Groups window. It shows our newly created Global Mirror consistency group, as shown in Figure 8-288.
We are presented with a wizard that helps us create Global Mirror relationships. The first step in the wizard gives an introduction to the steps involved in the creation of the Global Mirror relationship, as shown in Figure 8-290. Click Next to proceed.
As shown in Figure 8-291, we name our first Global Mirror relationship GMREL1, click Global Mirror Relationship, and select the relationship for the cluster. In this case, it is an intercluster relationship towards ITSO-CLS4, as shown in Figure 8-272 on page 608.
Figure 8-291 Naming the Global Mirror relationship and selecting the type of the cluster relationship
The next step enables us to select a master VDisk. As this list could potentially be large, the Filtering Master VDisks Candidates window appears, which enables us to reduce the list of eligible VDisks based on a defined filter. In Figure 8-292, we use the filter GM* (use * to list all VDisks) and click Next.
As shown in Figure 8-293, we select GM_DB_Pri to be the master VDisk of the relationship, and click Next to proceed.
The next step will require us to select an auxiliary VDisk. The Global Mirror relationship wizard will automatically filter this list so that only eligible VDisks are shown. Eligible VDisks are those that have the same size as the master VDisk and are not already part of a Global Mirror relationship. As shown in Figure 8-294, we select GM_DB_Sec as the auxiliary VDisk for this relationship, and click Next to proceed.
As shown in Figure 8-295, select the relationship to be part of the consistency group that we have created and click Next to proceed.
Note: It is not mandatory to make the relationship part of a consistency group at this stage. It can also be done later, after the relationship has been created, by modifying that relationship.
Finally, in Figure 8-296, we verify the Global Mirror relationship attributes and click Finish to create it.
After successful creation of the relationship, the GUI returns to the Viewing Global Mirror Relationships window, as shown in Figure 8-297. This window will list the newly created relationship. Using the same process, the second Global Mirror relationship, GMREL2, is also created. Both relationships are shown in Figure 8-297.
Next, we are presented with the wizard that shows the steps involved in the process of creating a Global Mirror relationship, as shown in Figure 8-299. Click Next to proceed.
In Figure 8-300, we name the Global Mirror relationship GMREL3, specify that it is an intercluster relationship, and click Next.
Figure 8-300 Naming the Global Mirror relationship and selecting the type of cluster relationship
As shown in Figure 8-301 on page 621, we are prompted for a filter prior to presenting the master VDisk candidates. We use * to list all candidates and click Next.
As shown in Figure 8-302, we select GM_App_Pri to be the master VDisk for the relationship, and click Next to proceed.
As shown in Figure 8-303, we select GM_App_Sec as the auxiliary VDisk for the relationship, and click Next to proceed.
As shown in Figure 8-304, we did not select a consistency group, as we are creating a stand-alone Global Mirror relationship.
We also specify that the master and auxiliary VDisk are already synchronized; for the purpose of this example, we can assume that they are pristine. This is shown in Figure 8-305 on page 623.
Figure 8-305 Selecting the synchronized option for Global Mirror relationship
Note: To add a Global Mirror relationship to a consistency group, it must be in the same state as the consistency group. Even though we intend to make the Global Mirror relationship GMREL3 part of the consistency group CG_W2K3_GM, we are not offered the option, as shown in Figure 8-305, because the state of GMREL3 is Consistent Stopped (we selected the synchronized option), while the state of CG_W2K3_GM is currently Inconsistent Stopped.
Finally, Figure 8-306 shows the actions that will be performed. We click Finish to create this new relationship.
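The rule in the note above can be expressed as a one-line check; this helper is our own illustration, not an SVC command:

```python
def can_join_consistency_group(relationship_state, group_state):
    """A relationship can be added to a consistency group only when
    both are in the same state (see the note above)."""
    return relationship_state == group_state

# GMREL3 (Consistent Stopped) cannot join CG_W2K3_GM (Inconsistent Stopped)
```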
After successful creation, we are returned to the Viewing Global Mirror Relationship window. Figure 8-307 now shows all our defined Global Mirror relationships.
In Figure 8-309, we do not need to change the parameters Forced start, Mark as clean, or Copy Direction, as this is the first time we are invoking this Global Mirror relationship (and we already defined the relationship as being synchronized in Figure 8-305 on page 623). We click OK to start the stand-alone Global Mirror relationship GMREL3.
Since the Global Mirror relationship was in the Consistent Stopped state and no updates have been made on the primary VDisk, the relationship quickly enters the Consistent Synchronized state, as shown in Figure 8-310.
Figure 8-311 Selecting Global Mirror consistency group and starting the copy process
As shown in Figure 8-312, we click OK to start the copy process. We cannot select the options Forced start, Mark as clean, or Copy Direction, as this is the first time we are invoking this Global Mirror relationship.
As shown in Figure 8-313, we are returned to the Viewing Global Mirror Consistency Groups window and the consistency group CG_W2K3_GM has changed to the Inconsistent copying state. Since the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all relationships in the consistency group. Upon completion of the background copy for all relationships in the consistency group, it enters the Consistent Synchronized state.
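The state flow described in this section can be summarized as a small transition table. This is our own illustrative model of the states named in the text, not an SVC API:

```python
# Consistency group / relationship state transitions described in this
# section (illustrative; the event names are ours).
TRANSITIONS = {
    ("Inconsistent stopped", "start"): "Inconsistent copying",
    ("Inconsistent copying", "background copy complete"): "Consistent synchronized",
    # When no updates were made while stopped (see the GMREL3 example)
    ("Consistent stopped", "start"): "Consistent synchronized",
    # Stopping with write access enabled on the secondary
    ("Consistent synchronized", "stop with write access"): "Idling",
}

def next_state(state, event):
    """Return the next state, or the current state if the event
    does not apply in that state."""
    return TRANSITIONS.get((state, event), state)
```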
Figure 8-314 Monitoring background copy process for Global Mirror relationships
Figure 8-315 Monitoring background copy process for Global Mirror relationships
Note: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.
As shown in Figure 8-317, we check the Enable write access... option and click OK to stop the Global Mirror relationship.
Figure 8-317 Enable access to the secondary VDisk while stopping the relationship
As shown in Figure 8-318, the Global Mirror relationship transits to the Idling state when stopped, while enabling write access to the secondary VDisk.
As shown in Figure 8-320, we click OK without specifying the Enable write access... option to the secondary VDisk.
Figure 8-320 Stopping the consistency group without enabling access to the secondary VDisk
The consistency group enters the Consistent stopped state when stopped. Afterwards, if we want to enable access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process, specifying that access is to be enabled on the secondary VDisks. In Figure 8-321, we select the Global Mirror relationship, select Stop Copy Process from the drop-down menu, and click Go.
As shown in Figure 8-322, we check the Enable write access... check box and click OK.
When applying the Enable write access... option, the consistency group transits to the Idling state, as shown in Figure 8-323.
Figure 8-323 Viewing the Global Mirror consistency group after write access to the secondary VDisk
Figure 8-324 Starting stand-alone Global Mirror relationship in the Idling state
As shown in Figure 8-325, we check the Force option, since write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary, and click OK.
The Global Mirror relationship enters the Consistent copying state. When the background copy is complete, the relationship transits to the Consistent synchronized state, as shown in Figure 8-326.
Figure 8-327 Starting the copy process for Global Mirror consistency group
As shown in Figure 8-328, we check the Force option and set the copy direction by specifying the auxiliary VDisks to be the primary.
Figure 8-328 Restarting the copy process for the consistency group
When the background copy completes, the Global Mirror consistency group enters the Consistent synchronized state, as shown in Figure 8-329.
Also shown in Figure 8-330 are the individual relationships within that consistency group.
Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transits from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Global Mirror relationship.
Figure 8-331 Selecting the relationship for which the copy direction is to be changed
In Figure 8-332, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Global Mirror relationship, we specify the auxiliary VDisk to be the primary, and click OK.
Figure 8-332 Selecting the primary VDisk as auxiliary to switch the copy direction
The copy direction is now switched and we are returned to the Viewing Global Mirror Relationship window, where we see that the copy direction has been switched, as shown in Figure 8-333.
Figure 8-333 Viewing Global Mirror relationship after changing the copy direction
Figure 8-334 Selecting the consistency group for which the copy direction is to be changed
In Figure 8-335, we see that currently the primary VDisks are also the master. So, to change the copy direction for the Global Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.
Figure 8-335 Selecting the primary VDisk as auxiliary to switch the copy direction
The copy direction is now switched and we are returned to the Viewing Global Mirror Consistency Group window, where we see that the copy direction has been switched. Figure 8-336 shows that the auxiliary is now the primary.
Figure 8-336 Viewing Global Mirror consistency groups after changing the copy direction
Figure 8-337 shows the new copy direction for individual relationships within that consistency group.
Figure 8-337 Viewing Global Mirror relationships after changing the copy direction for the consistency group
As everything has been completed to our expectations, we are now finished with Global Mirror.
you will see that new software is available. Use the link that is provided there to download the new software and to get more information about it.
Important: To use this feature, the SSPC/Master Console must be able to access the Internet. If the SSPC cannot access the Internet because of restrictions, such as a local firewall, you will see the message The update server cannot be reached at this time. Use the Web link provided in the message for the latest software information.
C:\Program Files\IBM\SDDDSM>datapath query adapter

Active Adapters :2

Adpt#  Name             State    Mode     Select  Errors  Paths  Active
    0  Scsi Port2 Bus0  NORMAL   ACTIVE      167       0      4       4
    1  Scsi Port3 Bus0  NORMAL   ACTIVE      137       0      4       4
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#    Adapter/Hard Disk            State  Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL      37       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL      29       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Chapter 8. SVC operations using the GUI
Path#    Adapter/Hard Disk            Errors
    0    Scsi Port2 Bus0/Disk2 Part0       0
    1    Scsi Port2 Bus0/Disk2 Part0       0
    2    Scsi Port3 Bus0/Disk2 Part0       0
    3    Scsi Port3 Bus0/Disk2 Part0       0
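The adapter listing above can be checked mechanically before an upgrade. The following is a small illustrative sketch (our own helper, not part of SDDDSM) that scans datapath query adapter output and confirms that every adapter is NORMAL with all paths active:

```python
def all_paths_active(datapath_output):
    """Return True when every adapter row in 'datapath query adapter'
    output reports State NORMAL and as many Active paths as Paths.

    Assumes rows shaped like:
    0  Scsi Port2 Bus0  NORMAL  ACTIVE  167  0  4  4
    """
    rows = [ln.split() for ln in datapath_output.splitlines()
            if ln.strip() and ln.split()[0].isdigit()]
    if not rows:
        return False
    for row in rows:
        state = row[4]                      # State column after Adpt# and Name
        paths, active = int(row[-2]), int(row[-1])
        if state != "NORMAL" or active != paths:
            return False
    return True
```

An output where Active is lower than Paths, or where an adapter is not NORMAL, makes the function return False, flagging the host for attention before the upgrade.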
You can check the I/O paths by using datapath query commands, as shown in Example 8-1 on page 637. You do not need to check hosts that have no active I/O operations to the SAN during the software upgrade.
Tip: See the Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540, for more information about datapath query commands.
It is well worth double-checking that your UPS power configuration is set up correctly (even if your cluster is running without problems). Specifically:
- Ensure that your UPSs are all getting their power from an external source and that they are not daisy chained. In other words, make sure that each UPS is not supplying power to another node's UPS.
- Ensure that the power cable and the serial cable coming from the back of each node go back to the same UPS. If the cables are crossed and go back to different UPSs, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.
This utility is intended to supplement, rather than duplicate, the existing tests carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log). The upgrade test utility accepts command-line parameters.
Prerequisites
This utility can only be installed on clusters running SVC V4.1.0.0 or later.
Installation Instructions
To use the upgrade test utility, follow these steps:
1. Download the latest version of the upgrade test utility (IBM2145_INSTALL_svcupgradetest_V.R) using the download link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
2. Install the utility package using the standard SVC Console (GUI) or command-line interface (CLI) software upgrade procedures that are used to install any new software onto the cluster. An example CLI command to install the package, once it has been uploaded to the cluster, is:
svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_X.XX
3. Run the upgrade test utility by logging on to the SVC command-line interface and running svcupgradetest -v <V.R.M.F>, where V.R.M.F is the version number of the SVC release being installed. For example, if you are upgrading to SVC V5.1.0.0, the command is svcupgradetest -v 5.1.0.0.
The output from the command either states that no problems have been found or directs you to details about any known issues that have been discovered on this cluster. Example 8-2 shows the command to test an upgrade.
Example 8-2 Run an upgrade test
IBM_2145:ITSO-CLS2:admin>svcupgradetest
svcupgradetest version 4.11

Please wait while the tool tests for issues that may prevent a
software upgrade from completing successfully. The test will take
approximately one minute to complete.

The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.
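Because the -v argument must be a full V.R.M.F version string (for example, 5.1.0.0), a wrapper script might validate it before invoking the utility. The following is a hypothetical sketch; the helper name is ours, not an SVC command:

```python
import re

def build_upgradetest_command(version):
    """Build the 'svcupgradetest -v V.R.M.F' command line described
    above, after checking that the version string has the V.R.M.F
    shape (for example, 5.1.0.0)."""
    if not re.fullmatch(r"\d+\.\d+\.\d+\.\d+", version):
        raise ValueError("expected a V.R.M.F version such as 5.1.0.0")
    return "svcupgradetest -v " + version
```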
Copying files, please wait...
Copying files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.100047.080617.002334.tgz
Note: You can ignore the error message No such file or directory.
Select Software Maintenance → List Dumps → Software Dumps, download the dump that was created in Example 8-3 on page 639, and store it in a safe place with the SVC Config that you created previously (see Figure 8-339 and Figure 8-340).
4. From the SVC Welcome screen, click the Service and Maintenance option and then the Upgrade Software link. 5. In the Upgrade Software window shown in Figure 8-341, you can either upload a new software upgrade file or list the upgrade files. Click the Upload button to upload the latest SVC cluster code.
6. In the Software Upgrade (file upload) window (Figure 8-342 on page 642), type or browse to the directory on your management workstation (for example, master console) where you stored the latest code level and click Upload.
7. The File Upload window (Figure 8-343) is displayed if the file is uploaded. Click Continue.
8. The Select Upgrade File window (Figure 8-344) lists the available software packages. Make sure the radio button next to the package you want to apply is selected. Click the Apply button.
9. In the Confirm Upgrade File window (Figure 8-345 on page 643), click the Confirm button
10.After this confirmation, the SVC will check if there are any outstanding errors. If there are no errors, click Continue, as shown in Figure 8-346, to proceed to the next upgrade step. Otherwise, the Run Maintenance button is displayed, which is used to check the errors. For more information about how to use the maintenance procedures, see 8.17.6, Running maintenance procedures on page 645.
11.The Check Node Status window shows the in-use nodes with their current status displayed, as shown in Figure 8-347. Click Continue to proceed.
12.The Start Upgrade window is displayed. Click the Start Software Upgrade button to start the software upgrade, as shown in Figure 8-348 on page 644.
The upgrade starts by upgrading one node in each I/O group.
13.The Software Upgrade Status window (Figure 8-349) opens. Click the Check Upgrade Status button periodically. This process might take a while to complete. When the software upgrade completes, you get a completion message, and the code level of the cluster and nodes shows the newly applied software level.
14.During the upgrade process, you can only issue informational commands. All task commands, such as the creation, modification, mapping, or deletion of a VDisk (as shown in Figure 8-350), are denied. This applies to both the GUI and the CLI.
15.The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted in turn. Although unlikely, if the concurrent code load (CCL) fails, for example, if one node fails to accept the new code level, the update on that node is backed out and the node reverts to the original code level. From 4.1.0 onwards, the update simply waits for user intervention. For example, if there are two nodes (A and B) in an I/O group, node A has been upgraded successfully, and node B then suffers a hardware failure, the upgrade ends with an I/O group that has a single node at the higher code level. When the hardware failure on node B is repaired, the CCL completes the code upgrade process.
Tip: Be patient! After the software update is applied, the first SVC node in the cluster updates and installs the new SVC code version shortly afterwards. If there is more than one I/O group (up to four I/O groups are possible) in an SVC cluster, the second node of the second I/O group loads the new SVC code and restarts with a 10 minute delay after the first node. A 30 minute delay between the update of the first node and the second node in an I/O group ensures that all paths, from a multipathing point of view, are available again. An SVC cluster update with one I/O group takes approximately one hour.
16.If you run into an error, go to the Analyze Error Log window and search for Software Install completed. Select the Sort by date with the newest first radio button and click Perform. This should list the software install entry near the top. For more information about working with the Analyze Error Log window, see 8.17.10, Analyzing the error log on page 655. You might also find it worthwhile to capture information for IBM Support to help diagnose what went wrong.
You have now completed the tasks required to upgrade the SVC software. Click the X icon in the upper right corner of the display area to close the Upgrade Software window. Be careful not to close the browser by mistake.
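Reading the tip above as a rough rule of thumb (about 30 minutes per node, two nodes per I/O group, so a single-I/O-group cluster takes roughly an hour), a back-of-envelope estimate can be sketched. This is our own reading of the tip, not an IBM-published formula:

```python
def rough_upgrade_minutes(io_groups):
    """Very rough upgrade-time estimate implied by the tip above:
    roughly 30 minutes per node and two nodes per I/O group, so an
    SVC cluster with one I/O group takes about one hour."""
    if not 1 <= io_groups <= 4:  # up to four I/O groups are possible
        raise ValueError("an SVC cluster has 1 to 4 I/O groups")
    return io_groups * 2 * 30
```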
3. This generates a new error log file in the /dumps/elogs/ directory (Figure 8-352 on page 646). We can also see the list of the errors, as shown in Figure 8-352 on page 646.
4. Click the error number in the Error Code column in Figure 8-352. This gives you the explanation for this error, as shown in Figure 8-353.
5. To perform problem determination, click Continue. The details for the error are displayed, and you might be given options to diagnose or repair the problem. In this case, it asks you to check an external configuration and then press Continue (Figure 8-354 on page 647).
6. The SVC maintenance procedure has completed and the error is fixed, as shown in Figure 8-355.
7. The discovery reported no new errors, so the entry in the error log is now marked as fixed (as shown in Figure 8-356). Click OK.
2. Add the IP address of your SNMP Manager, the (optional) port, and the community string to use (Figure 8-358). Select Add Server and click Go.
Note: Depending on which IP protocol addressing is configured, options are displayed for IPv4, IPv6, or both.
3. The next window now displays confirmation that it has updated the settings, as shown in Figure 8-359 on page 649.
4. The next window now displays the current status, as shown in Figure 8-360.
5. You can now click the X icon in the upper right corner of the Set SNMP Error Notification frame to close this window.
The syslog messages can be sent in compact message format or full message format. Example 8-4 shows a compact format syslog message.
Example 8-4 Compact syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 Example 8-5 on page 651 shows a full format syslog message.
Example 8-5 Full format syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000#Additional Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000
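Both the compact and full formats use the same '#Key=Value' structure, so a monitoring script receiving these syslog messages could parse them generically. The following parser is a hypothetical sketch, not an IBM-provided tool:

```python
def parse_svc_syslog(message):
    """Parse an SVC syslog notification of the '#Key=Value' form shown
    in Example 8-4 and Example 8-5 into a dict. The leading product
    token (for example, IBM2145) is stored under the 'Product' key."""
    product, _, rest = message.partition("#")
    fields = {"Product": product.strip()}
    for chunk in rest.split("#"):
        key, sep, value = chunk.partition("=")
        if sep:  # skip stray fragments without an '=' sign
            fields[key.strip()] = value.strip()
    return fields
```

A filter could then, for example, alert whenever the parsed ErrorCode field equals 1070 (the node CPU fan failure shown above).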
Figure 8-367 on page 653 shows how to configure the SMTP server in the SVC cluster.
Figure 8-369 on page 653 shows how to define the support e-mail where SVC notifications will be sent.
Figure 8-369 e-mail notification user
Figure 8-372 on page 654 shows how to send a test e-mail to all users.
a. Select the appropriate radio buttons and click the Process button to display the log for analysis. The Analysis Options and Display Options radio button groups allow you to filter the results of your log inquiry to reduce the output.
b. You can display the whole log, or you can filter the log so that only errors, events, or unfixed errors are displayed. You can also sort the results by selecting the appropriate display options. For example, you can sort the errors by error priority (lowest number = most serious error) or by date. If you sort by date, you can specify whether the newest or oldest error displays at the top of the table. You can also specify the number of entries to display on each page of the table. Figure 8-375 on page 657 shows an example of the error log entries listed.
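The sort options in step b can be illustrated with a small sketch. Entries here are (priority, timestamp, text) tuples of our own devising, where a lower priority number means a more serious error:

```python
def sort_error_log(entries, by="priority", newest_first=True):
    """Sort (priority, timestamp, text) error log entries the way the
    Display Options describe: by priority (lowest number = most
    serious error first) or by date (newest or oldest first)."""
    if by == "priority":
        return sorted(entries, key=lambda e: e[0])
    return sorted(entries, key=lambda e: e[1], reverse=newest_first)
```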
c. Click an underlined sequence number; this gives you the detailed log of this error (Figure 8-376).
d. You can optionally display detailed sense code data by pressing the Sense Expert button shown in Figure 8-377 on page 659. Press Return to go back to the detailed error Analysis window.
e. If the log entry is an error, you have the option of marking the error as fixed. This does not run any other checks or processes, so we recommend that you instead do this as a maintenance procedures task (see 8.17.6, Running maintenance procedures on page 645).
f. Click the Clear Log button at the bottom of the Error Log Analysis window (see Figure 8-374) to clear the log. If the error log contains unfixed errors, a warning message is displayed when you click Clear Log.
3. You can now click the X icon in the upper right corner of the Analyze Error Log window.
2. Now you can choose from Capacity Licensing or Physical Disk Licensing. Figure 8-379 shows the Physical Disk Licensing Settings panel.
Figure 8-380 on page 661 shows the Capacity Licensing Settings panel.
3. In the License Settings window (Figure 8-381), consult your license before you make changes in this window. If you purchased additional features (for example, FlashCopy or Global Mirror) or if you increased the capacity of your license, make the appropriate changes. Then click the Update License Settings button.
4. You now see a license confirmation window, as shown in Figure 8-382 on page 662. Review this window and ensure that you are in compliance. If you are in compliance, click I Agree to make the requested changes take effect.
5. You return to the Update License Settings review window (Figure 8-383), where your changes should be reflected.
6. You can now click the X icon in the upper right corner of the License Settings window.
3. You can now click the X icon in the upper right corner of the View License Settings Log window.
generated in Example 8-3 on page 639. Click any of the available links (the underlined text in the table under the List Dumps heading) to go to another window that displays the available dumps. To see the dumps on the other node, you must click Check Other Nodes.
Note: By default, the dump and log information that is displayed is the information available from the configuration node. In addition to these files, each node in the SVC cluster keeps a local software dump file, and occasionally, other dumps are stored on the nodes. Click the Check Other Nodes button at the bottom of the List Dumps window (Figure 8-386) to see which dumps or logs exist on other nodes in your cluster.
3. Figure 8-387 shows the list of dumps from the partner node. You can see a list of the dumps by clicking one of the Dump Types.
4. To copy a file from this partner node to the config node, click the dump type and then click the file you want to copy, as shown in Figure 8-388 on page 665.
5. You will see a confirmation window stating that the dumps are being retrieved. You can either click Continue to keep working with the other node or click Cancel to go back to the original node (Figure 8-389).
6. After all the necessary files are copied to the SVC config node, click Cancel to finish the copy operation, and Cancel again to return to the SVC config node. Now, for example, if you click the Error Logs link, you should see information similar to that shown in Figure 8-390 on page 666.
7. From this window, you can perform either of the following tasks:
- Click any of the available log file links (indicated by the underlined text) to display the log in complete detail.
- Delete one or all of the dump or log files. To delete all, click the Delete All button. To delete some, select the radio button or buttons to the right of the files and click the Delete button.
8. You can now click the X icon in the upper right corner of the List Dumps window.
If, for any reason, you want to set your own quorum disks (for example, if you have installed additional back-end storage and you want to move one or two quorum disks onto the newly installed back-end storage subsystem), complete the following tasks: From the Welcome screen, select Work with Managed Disks, and then select Quorum Disks; this takes you to the window shown in Figure 8-391.
We can now select our quorum disks and identify which is to be the active one. To change the active quorum, as shown in Figure 8-392 on page 667, we start by selecting the MDisk we want to contain our quorum disk.
We confirm that we want to change the active quorum disk as shown in Figure 8-393.
After we have changed the active quorum we can see that our previous active quorum disk is in the state of initializing as shown in Figure 8-394.
Shortly afterwards, the change completes successfully, as shown in Figure 8-395 on page 668.
Quorum disks are only created if at least one MDisk is in managed mode (that is, it has been formatted by the SVC with extents in it). Otherwise, a 1330 cluster error message is displayed on the SVC front panel. You can correct this error only by placing MDisks in managed mode.
Backing up the cluster configuration enables you to restore your cluster configuration if it is lost. Only the data that describes the cluster configuration is backed up; to back up your application data, you must use the appropriate backup methods. To begin the restore process, consult IBM Support to determine why you cannot access your original configuration data.
The prerequisites for a successful backup are as follows:
- All nodes in the cluster must be online.
- No object name can begin with an underscore (_).
- Do not run any independent operations that could change the cluster configuration while the backup command runs.
- Do not make any changes to the fabric or cluster between backup and restore. If changes are made, back up your configuration again, or you might not be able to restore it later.
Note: We recommend that you make a backup of the SVC configuration data after each major change in the environment, such as defining or changing VDisks, VDisk-to-host mappings, and so on.
The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and must be copied to an external and secure place for backup purposes.
Important: We strongly recommend that you change the default names of all objects to non-default names. For each object with a default name, a warning is produced, and the object is restored with its original name and _r appended to it.
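The backup described above can also be driven from the CLI with the svcconfig backup command, which writes the svc.config.backup.xml file to /tmp on the configuration node. The sketch below only assembles the two commands involved rather than running them against a live cluster; the cluster IP address and the private key file name are hypothetical.

```shell
# Sketch only: these commands target an SVC cluster, so we assemble them
# as strings here. The IP address and key file are hypothetical.
CLUSTER_IP="9.43.86.117"

# 1. Create the backup on the configuration node
#    (writes /tmp/svc.config.backup.xml on the cluster).
BACKUP_CMD="ssh -i privatekey admin@${CLUSTER_IP} svcconfig backup"

# 2. Copy the resulting XML file off the cluster to a secure location.
COPY_CMD="scp -i privatekey admin@${CLUSTER_IP}:/tmp/svc.config.backup.xml ./svc.config.backup.xml"

echo "$BACKUP_CMD"
echo "$COPY_CMD"
```

Keeping a dated copy of the XML file off the cluster after every major configuration change satisfies the recommendation above.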
3. After the configuration backup is successfully done, you see messages similar to the ones shown in Figure 8-397 on page 670. Make sure that you read, understand, act upon, and document the warning messages, because they can influence the restore procedure.
4. You can now click the X icon in the upper right corner of the Backing up a Cluster Configuration window. Note: To avoid getting the CMMVC messages that are shown in Figure 8-397, you need to replace all the default names, for example, mdisk1, vdisk1, and so on.
After you have saved your configuration file, it will be presented to you as an .xml file. Figure 8-399 shows an SVC backup configuration file example.
3. Click Delete to confirm the deletion of the configuration backup data. See Figure 8-401.
8.18.5 Fabrics
From the Fabrics link in the Service and Maintenance panel, you can get a view of the fabrics from the SVC's point of view, which can be useful when debugging a SAN problem. Figure 8-402 on page 673 shows a Viewing Fabrics example.
Chapter 9. Data migration
In this chapter, we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure using the SVC. We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period, or after using the SVC as a data mover because it offered the best data migration performance or the best SLA to your applications during the migration. Moreover, we show how to migrate from a fully allocated VDisk to a Space-Efficient VDisk using the VDisk Mirroring feature and Space-Efficient VDisks together, and we show an example of using intracluster Metro Mirror to migrate data.
The syntax of the CLI command is:

svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents
   -target target_mdisk_id | target_mdisk_name [-threads number_of_threads]
   -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are:
- -vdisk: Specifies the VDisk ID or name to which the extents belong.
- -source: Specifies the source MDisk ID or name on which the extents currently reside.
- -exts: Specifies the number of extents to migrate.
- -target: Specifies the target MDisk ID or name onto which the extents are to be migrated.
- -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
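As an illustration of the syntax above, the sketch below assembles a migrateexts invocation as a string rather than running it; the VDisk ID, MDisk names, and extent count are hypothetical.

```shell
# Hypothetical objects: migrate 64 extents belonging to VDisk 3 from
# mdisk4 to mdisk7, using 2 threads. Assembled as a string, not run
# against a live cluster.
CMD="svctask migrateexts -vdisk 3 -source mdisk4 -exts 64 -target mdisk7 -threads 2"
echo "$CMD"
```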
The syntax of the CLI command is:

svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name
   [-threads number_of_threads] [-copy copy_id] -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are:
- -vdisk: Specifies the VDisk ID or name to migrate into another MDG.
- -mdiskgrp: Specifies the target MDG ID or name.
- -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
- -copy: Required if the specified VDisk has more than one copy.

The syntax of the CLI command is:

svctask migratetoimage [-copy copy_id] -vdisk source_vdisk_id | name
   -mdisk unmanaged_target_mdisk_id | name
   -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

The parameters for the CLI command are:
- -vdisk: Specifies the name or ID of the source VDisk to be migrated.
- -copy: Required if the specified VDisk has more than one copy.
- -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)
- -mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration completes.
- -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
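To make the two command forms concrete, the sketch below assembles one migratevdisk and one migratetoimage invocation as strings; the VDisk, MDisk, and MDG names are hypothetical and the commands are not run against a cluster.

```shell
# Hypothetical names; assembled as strings only.
# Move VDisk V3 into MDG2 with a single background thread:
MIGRATE_VDISK="svctask migratevdisk -mdiskgrp MDG2 -threads 1 -vdisk V3"

# Migrate VDisk V3 onto the unmanaged MDisk mdisk9 as an image mode disk,
# placing mdisk9 into the MDG named MDG_img afterwards:
MIGRATE_IMAGE="svctask migratetoimage -vdisk V3 -mdisk mdisk9 -mdiskgrp MDG_img"

echo "$MIGRATE_VDISK"
echo "$MIGRATE_IMAGE"
```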
In Figure 9-1, we illustrate how VDisk V3 is migrated from MDG 1 to MDG 2.
Important: For the migration to be valid, the source and destination MDGs must have the same extent size.
Figure 9-1 Migrating VDisk V3 from MDG 1 to MDG 2 (I/O Group 0 with two SVC1 nodes; VDisks V1 to V6; MDGs 1 to 3 containing MDisks M1 to M7 on RAID Controllers A and B)
Extents are allocated to the migrating VDisk from the set of MDisks in the target MDG, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads to use while migrating; using only one thread puts the least background load on the system. If a large number of extents are being migrated, you can specify the number of threads that will be used in parallel (from 1 to 4).

The offline rules apply to both MDGs. Referring back to Figure 9-1, if any of the MDisks M4, M5, M6, or M7 goes offline, then VDisk V3 goes offline. If MDisk M4 goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is migrated, while the MDisk access mode transitions from image to managed. For the duration of the move, the VDisk is listed as a member of the original MDG. For configuration purposes, the VDisk moves to the new MDG instantaneously at the end of the migration.
corrupted by the loss of the cached data. During the flush, the VDisk operates in cache write-through mode.

Attention: Do not move a VDisk to an offline I/O group under any circumstance. You must ensure that the I/O group is online before you move the VDisks to avoid any data loss.

You must quiesce host I/O before the migration for two reasons:
- If there is significant data in cache that takes a long time to destage, the command line will time out.
- SDD vpaths associated with the VDisk are deleted before the VDisk move takes place in order to avoid data corruption. Data corruption can therefore occur if I/O is still ongoing at a particular LUN ID when it is reused for another VDisk.

When migrating a VDisk between I/O groups, you cannot specify the preferred node; the preferred node is assigned by the SVC.

The syntax of the CLI command is:

svctask chvdisk [-name new_name] [-iogrp io_group_id | io_group_name [-force]]
   [-node node_id | node_name] [-rate throttle_rate [-unitmb]]
   [-udid vdisk_udid]
   [-warning disk_size | disk_size_percentage% [-unit b | kb | mb | gb | tb | pb]]
   [-autoexpand on | off [-copy id]] [-primary copy_id]
   [-syncrate percentage] vdisk_name | vdisk_id

For detailed information about the chvdisk command parameters, refer to the SVC command line interface help by typing:

svctask chvdisk -h

Or refer to the Command-Line Interface User's Guide, SG26-7903-05.

The chvdisk command modifies a single property of a virtual disk (VDisk). To change the VDisk name and modify the I/O group, for example, you must issue the command twice. A VDisk that is a member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O group, and this restriction cannot be overridden by using the -force flag.
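Because chvdisk changes only one property per invocation, renaming a VDisk and moving it to another I/O group takes two separate commands. The sketch below assembles both as strings; the VDisk and I/O group names are hypothetical.

```shell
# Hypothetical names; chvdisk modifies one property per invocation,
# so a rename and an I/O group move take two commands.
RENAME_CMD="svctask chvdisk -name W2k8_Log_new W2k8_Log"
MOVE_CMD="svctask chvdisk -iogrp io_grp1 W2k8_Log_new"
echo "$RENAME_CMD"
echo "$MOVE_CMD"
```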
Important: After a migration has been started, there is no way for you to manually stop it. The migration runs to completion unless it is suspended by an error condition or the VDisk being migrated is deleted.
9.3.1 Parallelism
Some of the activities described below can be carried out in parallel.
Per cluster
An SVC cluster supports up to 32 active concurrent instances of the following migration activities:
- Migrate multiple extents
- Migrate between MDGs
- Migrate off a deleted MDisk
- Migrate to image mode

These high-level migration tasks operate by scheduling single-extent migrations, as follows:
- Up to 256 single-extent migrations can run concurrently. This number is made up of the single-extent migrations that result from the operations listed above.
- The Migrate Multiple Extents and Migrate Between MDGs commands support a flag that allows you to specify the number of threads to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.
Per MDisk
The SVC supports up to four concurrent single-extent migrations per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single-extent migrations are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.
Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. The algorithm used to migrate an extent is as follows:
1. Pause all I/O on the source MDisk on all nodes in the SVC cluster (that is, queue all new I/O requests in the virtualization layer in the SVC and wait for all outstanding requests to complete). I/O to other extents is unaffected.
2. Unpause I/O on the source MDisk extent, apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination.
3. On the node performing the migration, for each 256 KB section of the chunk:
   a. Synchronously read 256 KB from the source.
   b. Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated, checkpoint the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes go only to the destination).
6. If the checkpoint fails, the I/O is unpaused.
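The chunk and section sizes above imply a fixed number of copy operations per chunk, which a quick arithmetic check confirms. The 256 MB extent size in the sketch is an assumption for illustration; the per-chunk figure is independent of it.

```shell
# Units in KB. Each 16 MB chunk is copied in 256 KB sections, so one
# chunk costs 64 synchronous reads and 64 synchronous writes.
EXTENT_KB=$((256 * 1024))   # assuming a 256 MB extent size (illustrative)
CHUNK_KB=$((16 * 1024))     # 16 MB migration chunk
SECTION_KB=256              # 256 KB copy section

SECTIONS_PER_CHUNK=$((CHUNK_KB / SECTION_KB))
CHUNKS_PER_EXTENT=$((EXTENT_KB / CHUNK_KB))

echo "sections per chunk: $SECTIONS_PER_CHUNK"
echo "chunks per extent:  $CHUNKS_PER_EXTENT"
```

The 64 sections per chunk match the "64 synchronous reads and 64 synchronous writes" figure quoted later in this section.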
During the migration, the extent can be divided into three regions, as shown in Figure 9-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer, waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent in order to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, this operation might take minutes to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.
Figure 9-2 Migrating an extent (16 MB chunk; not to scale)
The SVC guarantees read stability during data migrations, even if the data migration is stopped by a node reset or a cluster shutdown. This is possible because the SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning. At the conclusion of the operation:
- Extents are migrated in 16 MB chunks, one chunk at a time.
- Each chunk is either copied, in progress, or not copied.
- When the extent is finished, its new location is saved.
Figure 9-3 on page 685 shows the relationship between data migration and write operations.
MDisk modes
There are three different MDisk modes:
1. Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
2. Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the VDisk, with no virtualization. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one VDisk.
3. Managed mode MDisk: Managed mode MDisks contribute extents to the pool of extents available in the MDG. Zero or more managed mode VDisks might use these extents.
This occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
4. Image mode to unmanaged mode. There are two distinct ways in which this can happen:
   - When an image mode VDisk is deleted, the MDisk that supported the VDisk becomes unmanaged.
   - When an image mode VDisk is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off it. It then transitions to unmanaged mode.
5. Image mode to managed mode. This occurs when the image mode VDisk that is using the MDisk is migrated into managed mode.
6. Managed mode to image mode is not possible. There is no operation that takes an MDisk directly from managed mode to image mode. You can achieve this by performing operations that convert the MDisk to unmanaged mode and then to image mode.
[State diagram of MDisk mode transitions: an MDisk moves from "Not in group" to "Managed mode" via add to group, back via remove from group, and to "Image mode" via complete migrate.]
Image mode VDisks have the special property that the last extent in the VDisk can be a partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode VDisk, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last extent, this last extent in the image mode VDisk must be the first to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the VDisk becomes a managed mode VDisk and is treated in the same way as any other managed mode VDisk. If the image mode disk does not have a partial last extent, no special processing is performed; the image mode VDisk is simply changed into a managed mode VDisk and is treated in the same way as any other managed mode VDisk.

After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.
- Migrate your VDisk to an image mode VDisk. You might perform this activity if you are removing the SVC from your SAN environment, for example, after a trial period. This step is detailed in 9.5.5, Migrating the VDisk from managed mode to image mode on page 702.
- Move an image mode VDisk to another image mode VDisk. This procedure can be used to migrate data from one storage subsystem to another. This step is detailed in 9.6.6, Migrate the VDisks to image mode VDisks on page 727.

These activities can be used individually or together, enabling you to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime required for these activities is the time it takes to remask and remap the LUNs between the storage subsystems and your SVC.
Figure 9-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 Fast Multipath Device.
9.5.2 SVC added between the host system and the DS4700
Figure 9-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required in order to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.
To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps:
1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC and remove the masking for the host. Figure 9-9 on page 691 shows the two LUNs with LUN IDs 12 and 13 remapped to the SVC cluster ITSO-CLS3.
5. Log on to your SVC Console, open Work with Managed Disks and Managed Disks, select Discover Managed Disks in the drop-down field and click Go (Figure 9-10).
Figure 9-11 on page 692 shows the two LUNs discovered as mdisk12 and mdisk13.
6. Now we create one new empty MDG for each MDisk that we want to use to create an image mode VDisk later. Open Work with Managed Disks and Managed Disk Groups, select Create an MDisk Group in the drop-down field, and click Go. Figure 9-12 shows the MDisk group creation.
7. Click Next.
8. Insert the MDG name, MDG_img_1, and do not select any MDisk, as shown in Figure 9-13 on page 693; then click Next.
9. Choose the extent size that you want to use, as shown in Figure 9-14, and then click Next. Keep in mind that the extent size you choose must be the same as in the MDG to which you will migrate your data later.
10.Now click Finish in order to complete the MDG creation. Figure 9-15 shows the completion screen.
11.Now we create new VDisks named W2k8_Log and W2k8_Data using the two newly discovered MDisks in the MDisk group MDG0, as follows:
12.Open the Work with Virtual Disks and Virtual Disks views and, as shown in Figure 9-16, select Create an Image Mode VDisk from the list and click Go.
13.The Create Image Mode Virtual Disk window (Figure 9-17 on page 694) is displayed. Click Next.
14.Type the name that you would like to use for the VDisk and select the attributes, in our case, the name is W2k8_Log. Click Next (Figure 9-18).
Figure 9-18 Set the attributes for the image mode Virtual Disk
15.Select the MDisk to create the image mode virtual disk and click Next (Figure 9-19).
Figure 9-19 Select the MDisk to use for your image disk
16.Select an I/O group, the Preferred Node, and the MDisk group you just created before. Optionally, you can let this system choose these settings (Figure 9-20). Click Next.
Note: If you have more than two nodes in the cluster, select the I/O groups of the nodes so that the load is evenly shared.
17.Review the summary and click Finish to create the image mode VDisk. Figure 9-21 on page 696 shows the image VDisk summary and attributes.
18.Repeat steps 6 through 17 for each LUN you want to migrate to the SVC. 19.In the Viewing Virtual Disk view, we see the two newly created VDisks, as shown in Figure 9-22. In our example, they are named W2k8_log and W2k8_data.
20.In the MDisk view (Figure 9-23), we see the two new MDisks are now shown as Image Mode Disk. In our example, they are named mdisk12 and mdisk13.
21.Map the VDisks again to the Windows 2008 host system:
22.Open the Work with Virtual Disks and Virtual Disks view, mark the VDisks, select Map Virtual Disk to a Host, and click Go (Figure 9-24).
23.Choose the host and enter the SCSI LUN IDs. Click OK (Figure 9-25 on page 698).
3. Select Start > All Programs > Subsystem Device Driver DSM > Subsystem Device Driver DSM to open the SDDDSM command-line utility (Figure 9-28).
4. Enter the command datapath query device to check if all paths are available, as planned in your SAN environment (Example 9-1).
Example 9-1 datapath query device
DEV#: 0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#              Adapter/Hard Disk      State  Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk0 Part0      OPEN   NORMAL     180       0
    1    Scsi Port2 Bus0/Disk0 Part0      OPEN   NORMAL       0       0
    2    Scsi Port2 Bus0/Disk0 Part0      OPEN   NORMAL     145       0
    3    Scsi Port2 Bus0/Disk0 Part0      OPEN   NORMAL       0       0

DEV#: 1  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#              Adapter/Hard Disk      State  Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0      OPEN   NORMAL      25       0
    1    Scsi Port2 Bus0/Disk1 Part0      OPEN   NORMAL     164       0
    2    Scsi Port2 Bus0/Disk1 Part0      OPEN   NORMAL       0       0
    3    Scsi Port2 Bus0/Disk1 Part0      OPEN   NORMAL     136       0

C:\Program Files\IBM\SDDDSM>
2. Select the MDG to which to migrate the disk and the number of used threads, as shown in Figure 9-30. Click OK.
Note: If you migrate the VDisks to another MDisk group, the extent size of the source and target managed disk group has to be equal.
3. The Migration Progress view will appear and enable you to monitor the migration progress (Figure 9-31).
4. Click the percentage to show more detailed information about this VDisk. During the migration process, the VDisks are still in the old MDisk group, and your server can still access the data. After the migration is complete, the VDisk is in the new MDisk group MDG_DS45 and becomes a striped VDisk. Figure 9-32 shows the migrated VDisk in the new MDG.
4. Select the source VDisk copy and click Next (Figure 9-35).
5. Select a target MDisk by clicking the radio button for it (Figure 9-36). Click Next.
6. Select an MDG by clicking the radio button for it (Figure 9-37). Click Next.
Note: If you migrate the VDisks to another MDisk group, the extent size of the source and target managed disk group has to be equal.
7. Select the number of threads (1 to 4). The higher the number, the higher the priority (Figure 9-38). Click Next.
9. The progress window will appear.
10.Repeat these steps for every VDisk that you want to migrate to an image mode VDisk.
11.Free the data from the SVC by using the procedure in 9.5.7, Free the data from the SVC on page 709.
To migrate the image mode VDisk to another image mode VDisk, perform the following steps: 1. Check the VDisk to migrate and select Migrate to an image mode VDisk from the drop-down menu. Click Go.
2. The Introduction window appears (Figure 9-42 on page 707). Click Next.
3. Select the VDisk source copy and click Next (Figure 9-43).
4. Select a target MDisk by clicking the radio button for it (Figure 9-44 on page 708). Click Next.
5. Select a target Managed Disk Group by clicking the radio button for it. Click Next.
6. Select the number of threads, from 1 to 4 (Figure 9-46). The higher the number, the higher the priority. Click Next.
7. Verify the migration attributes (Figure 9-47 on page 709) and click Finish.
9. Repeat these steps for all image mode VDisks that you want to migrate.
10.If you want to free the data from the SVC, use the procedure in 9.5.7, Free the data from the SVC on page 709.
If the command succeeds on an image mode VDisk, the underlying back-end storage controller is consistent with the data that a host could previously have read from the image mode VDisk; that is, all fast write data has been flushed to the underlying LUN. Deleting an image mode VDisk causes the MDisk associated with the VDisk to be ejected from the MDG, and the mode of the MDisk is returned to unmanaged.

Note: This only applies to image mode VDisks. If you delete a normal VDisk, all data is also deleted.

As shown in the figure on page 699, the SAN disks currently reside on the SVC 2145 device. Check that you have installed supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking and add the host to the masking.
3. Open the Virtual Disk to Host Mappings view in the SVC Console, mark your host, select Delete a Mapping, and click Go (Figure 9-49).
5. The VDisk is removed from the SVC.
6. Repeat steps 3 and 4 for every disk that you want to free from the SVC.
7. Power on your host system.
9.5.8 Put the disks freed from the SVC online in Windows 2008
1. Using your DS4500 Storage Manager interface, now remap the two LUNs that were MDisks back to your W2K8 server.
2. Open your Computer Management window. Figure 9-51 shows that the LUNs are now back to an IBM 1814 type.
3. Open your Disk Management window; you will see that the disks have appeared. You might need to reactivate each disk by using the right-click option on it.
Figure 9-53 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
- The LUN with SCSI ID 0 holds the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00, and Linux sees it as our /dev/sda disk.
- We have also mapped a second disk (SCSI ID 1) to the host. It is 5 GB in size and is mounted in the folder /data on disk /dev/dm-2.

Note: To successfully boot a host from the SAN, the LUN must have been assigned SCSI LUN ID 0.

Example 9-2 shows the disks directly attached to the Linux host.
Example 9-2 Directly attached disks
[root@Palau data]# df
Filesystem                       1K-blocks     Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   10093752  1971344    7601400  21% /
/dev/sda1                           101086    12054      83813  13% /boot
tmpfs                              1033496        0    1033496   0% /dev/shm
/dev/dm-2                          5160576   158160    4740272   4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem, as shown in Figure 9-53 on page 713:
- The Linux server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem are, through LUN masking, directly available to our Linux server.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Data -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
6  Palau_SANB online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
7  Palau_Data online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 9-55 shows our configured ports on an IBM DS4700 storage subsystem.
After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. These commands can be seen in Example 9-5.
Example 9-5 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

The storage subsystem can be renamed to something more meaningful with the svctask chcontroller -name command. (If we had many storage subsystems connected to our SAN fabric, renaming them would make it considerably easier to identify them.)
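As an illustration of renaming a controller, the sketch below assembles a chcontroller invocation as a string; the new controller name is hypothetical, and controller ID 1 refers to the DS4700 shown in the lscontroller output above.

```shell
# Hypothetical new name: rename controller id 1 (the DS4700) to
# something meaningful. Assembled as a string, not run against a cluster.
RENAME_CTRL="svctask chcontroller -name ITSO_DS4700 1"
echo "$RENAME_CTRL"
```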
Before we move the LUNs to the SVC, we have to configure the host multipath configuration for the SVC. To do this, edit your multipath.conf file, as shown in Example 9-7, and add the content of Example 9-8 to the file.
Example 9-7 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#

Example 9-8 Data to add to file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.
3. Using Storage Manager (our storage subsystem management tool), we can unmap/unmask the disks from the Linux server and remap/remask the disks to the SVC.

Note: Even though we are using boot from SAN, you can map the boot disk with any LUN number to the SVC; it does not have to be 0. The LUN number only matters later, when we configure the mapping in the SVC to the host.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-9 shows the commands that we used to discover our MDisks and verify that we have the correct ones.
Example 9-9 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 mdisk26 online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 mdisk27 online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk output) with the serial numbers that you recorded earlier (in Figure 9-56 and Figure 9-57 on page 718).

5. After verifying that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-10).
Example 9-10 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 md_palauS online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype image option (Example 9-11). This command will virtualize the disks in the exact same layout as though they were not virtualized.
Example 9-11 Create the image mode VDisks
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_SANB -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Data -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
29 palau_SANB 0 io_grp0 online 4 Palau_SANB 12.0GB image 60050768018301BF280000000000002B 0 1
30 palau_Data 0 io_grp0 online 4 Palau_Data 5.0GB image 60050768018301BF280000000000002C 0 1

7. Map the new image mode VDisks to the host (Example 9-12).

Attention: Make sure that you map the boot VDisk with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 9-12 Map the VDisks to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
0 Palau 1 30 palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C
Chapter 9. Data migration
IBM_2145:ITSO-CLS1:admin>

Note: While the application is in a quiescent state, you could choose to FlashCopy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy has completed before starting your application.

8. Power on your host server and enter your FC HBA adapter BIOS before booting the OS, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you need to follow these steps to see the new VDisk:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new VDisks (these details are beyond the scope of this book).
b. Check your syslog and verify that the kernel found the new VDisks. On Red Hat Enterprise Linux, the syslog is stored in /var/log/messages.
c. If your application and data are on an LVM volume, run vgscan to rediscover the volume group, then run vgchange -a y VOLUME_GROUP to activate the volume group.
10. Mount your file systems with the mount /MOUNT_POINT command (Example 9-13). The df output shows us that all disks are available again.
Example 9-13 Mount data disk
[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 10093752 1938056 7634688 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#

11. You are now ready to start your application.
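Step 9a mentions that the SCSI bus can be rescanned without unloading the HBA driver, but leaves the details out of scope. The following is a minimal sketch of that technique: on Linux 2.6 kernels, writing "- - -" (wildcard channel, target, and LUN) to a host adapter's sysfs scan file triggers a rescan. So that the sketch is runnable anywhere, it operates on a throwaway directory that stands in for /sys/class/scsi_host; on a real server, set SYSFS_ROOT=/sys/class/scsi_host and run it as root.

```shell
#!/bin/sh
# Stand-in for /sys/class/scsi_host so the sketch runs without real HBAs.
SYSFS_ROOT=${SYSFS_ROOT:-$(mktemp -d)}
mkdir -p "$SYSFS_ROOT/host0" "$SYSFS_ROOT/host1"
: > "$SYSFS_ROOT/host0/scan"
: > "$SYSFS_ROOT/host1/scan"

# Write the wildcard triple to every host adapter's scan file.
for host in "$SYSFS_ROOT"/host*; do
    echo "- - -" > "$host/scan"
    echo "rescanned ${host##*/}"
done
```

After the rescan, check the syslog (as in step 9b) to confirm that the kernel found the new disks.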
In our environment, we have:
Created and allocated three new LUNs to the SVC.
Discovered them as MDisks.
Renamed these LUNs to something more meaningful.
Created a new MDisk group.
Put all these MDisks into this group.
You can see the output of our commands in Example 9-14.
Example 9-14 Create a new MDisk group
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

Once this task has completed, Example 9-16 shows that the VDisks are now spread over three MDisks.
Example 9-16 Migration complete
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
SAN Volume Controller V5.1
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped VDisks on another storage subsystem (DS4500) is now complete. The original MDisks (md_palauS and md_palauD) can now be removed from the SVC, and these LUNs removed from the storage subsystem. If these LUNs were the last LUNs in use on our DS4700 storage subsystem, we could remove it from our SAN fabric.
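Rather than re-running svcinfo lsmigrate by hand until the migrations finish, the wait can be scripted. The sketch below is not an official SVC tool: it simply counts active migrations by parsing lsmigrate output, and for runnability it is fed the captured listing shown earlier instead of querying a live cluster. On a real system you would replace the here-string with the live command (for example, over SSH to the cluster) and loop until the count reaches zero, because an empty lsmigrate listing means all migrations have completed.

```shell
#!/bin/sh
# Captured "svcinfo lsmigrate" output stands in for a live query.
lsmigrate_output='migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30'

# Each active migration is reported as one migrate_type stanza.
active=$(printf '%s\n' "$lsmigrate_output" | grep -c '^migrate_type')
echo "active migrations: $active"
```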
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to something more meaningful, so that they do not get confused with other MDisks being used by other activities. We also create the MDisk group to hold our new MDisks. This is shown in Example 9-18.
Example 9-18 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is now ready for the VDisk migration to image mode VDisks.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is not aware that its data is being physically moved between storage subsystems. Once the migration has completed, the image mode VDisks are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.
If we only wanted to move the LUN that holds our application and data files, we could do that without rebooting the host. The only requirement would be that we unmount the file system and vary off the volume group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need an additional entry in the multipath.conf file. Check with the storage subsystem vendor to see which content you have to add to the file. You might be able to install and modify it ahead of time.

As we intend to move both LUNs at the same time, here are the required steps:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host. If you were just moving the LUNs that contained the application and data, you could follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that volume group with the vgchange -a n VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver using rmmod DRIVER_MODULE. This removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver; however, these details are not provided here.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-20). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these disks are no longer mapped to the Linux server.
Example 9-20 Remove the VDisks from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This makes them unmanaged, as seen in Example 9-21 on page 730.
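The quiesce sequence from step 2 lends itself to a small script. The sketch below prints each command instead of executing it (DRY_RUN=1, the default here), so it can be reviewed safely before running it for real as root. VolGroup01, /data, and qla2xxx are illustrative names for your volume group, mount point, and HBA driver module, not values taken from our environment.

```shell
#!/bin/sh
VG=${VG:-VolGroup01}              # illustrative volume group name
MOUNT_POINT=${MOUNT_POINT:-/data} # illustrative mount point
DRIVER=${DRIVER:-qla2xxx}         # illustrative HBA driver module

DRY_RUN=${DRY_RUN:-1}
run() {
    # Print the command in dry-run mode; otherwise execute it.
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run umount "$MOUNT_POINT"   # b. unmount the file system
run vgchange -a n "$VG"     # c. deactivate the LVM volume group
run rmmod "$DRIVER"         # d. unload the HBA driver
```

Setting DRY_RUN=0 performs the real quiesce; reverse the order (modprobe, vgchange -a y, mount) to bring the LUN back later.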
Note: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC automatically de-stages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to de-stage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the VDisk has uncommitted data in the cache by using the command svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Some modified data might exist in the cache.
corrupt: Some modified data might have existed in the cache, but any such data has been lost.
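The wait described in the note can be automated by polling fast_write_state until it reports empty. In the sketch below, svcinfo is a local stub that reports not_empty on the first poll and empty afterwards, standing in for the real CLI so the loop is runnable as shown; on a real cluster, remove the stub and issue the command over SSH to the SVC instead.

```shell
#!/bin/sh
FLAG=$(mktemp -u)   # marker file that makes the stub change state
svcinfo() {         # stub for the real CLI: the cache "drains" after one poll
    if [ -f "$FLAG" ]; then echo "fast_write_state empty"
    else touch "$FLAG"; echo "fast_write_state not_empty"; fi
}

vdisk=palau_SANB
while :; do
    state=$(svcinfo lsvdisk "$vdisk" | awk '/fast_write_state/ {print $2}')
    [ "$state" = "empty" ] && break
    echo "cache for $vdisk not committed yet (state=$state); waiting..."
    sleep 1
done
rm -f "$FLAG"
echo "safe to run: svctask rmvdisk $vdisk"
```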
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.

Attention: If one of the disks is used to boot your Linux server, then you need to make sure that it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds it during its initialization.

6. Power on your host server and enter your FC HBA adapter BIOS before booting the OS, and make sure that you change the boot configuration so that it points back to your storage subsystem. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
SAN Volume Controller V5.1
e. Exit the menu and save your changes.

Important: This is the last step at which you can still safely back out everything you have done so far. Up to this point, you can reverse all the actions that you have performed to get the server back online without data loss, that is:
Remap and remask the LUNs back to the SVC.
Run svctask detectmdisk to rediscover the MDisks.
Recreate the VDisks with svctask mkvdisk.
Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.

We are now ready to restart the Linux server. If all the zoning and LUN masking and mapping was done successfully, our Linux server should boot as though nothing has happened. If you only moved the application LUN and left your Linux server running, you need to follow these steps to see the new disk:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new disks (these details are beyond the scope of this book).
b. Check your syslog and verify that the kernel found the new disks. On Red Hat Enterprise Linux, the syslog is stored in /var/log/messages.
c. If your application and data are on an LVM volume, run vgscan to rediscover the volume group, then run vgchange -a y VOLUME_GROUP to activate the volume group.
7. Mount your file systems with the mount /MOUNT_POINT command (Example 9-22). The df output shows us that all disks are available again.
Example 9-22 File system after migration
[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 10093752 1938124 7634620 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau ~]#

8. You should now be ready to start your application.

Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and are then automatically removed once the SVC determines that there are no VDisks associated with them.
Figure 9-59 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem. Our ESX server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem:
The ESX server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.
The two LUNs defined on the storage subsystem are made directly available to our ESX server by using LUN masking.
Be very careful when connecting the SVC to your storage area network, because it requires you to connect cables to your SAN switches and alter your switch zone configuration. Doing these activities incorrectly could render your SAN inoperable, so make sure that you fully understand the impact of everything you are doing.

Connecting the SVC to your SAN fabric requires you to:
Assemble your SVC components (nodes, UPS, and master console), cable it correctly, power it on, and verify that it is visible on your storage area network.
Create and configure your SVC cluster.
Create these additional zones:
- An SVC node zone (the Black zone in our picture in Example 9-45 on page 756). This zone should contain just the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is a two node cluster where each node has four ports, so our Black zone has eight WWNs defined.
- A storage zone (our Red zone). This zone should also have all the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that the SVC will virtualize.
- A host zone (our Blue zone). This zone should contain the ports/WWNs for each host that will access VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 9-60.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512 MDisk Group, id [3], successfully created
Figure 9-61 Obtain your WWN using the VMware Management Console
The svcinfo lshbaportcandidate command on the SVC lists all the WWNs that the SVC can see on the SAN fabric that have not yet been allocated to a host. Example 9-24 shows the WWNs that it found on our SAN fabric. (If the port did not show up, it would indicate a zone configuration problem.)
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWNs to this entry. These commands can be seen in Example 9-25.
Example 9-25 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>
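A quick sanity check on this listing: every WWPN of the host should report state active before you proceed. The sketch below parses the captured lshost output rather than querying a live cluster; on a real system, you would pipe svcinfo lshost Nile into the same filter.

```shell
#!/bin/sh
# Captured "svcinfo lshost Nile" output stands in for a live query.
lshost_output='id 1
name Nile
port_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active'

ports=$(printf '%s\n' "$lshost_output"  | grep -c '^WWPN')
active=$(printf '%s\n' "$lshost_output" | grep -c '^state active')
echo "ports: $ports, active: $active"
if [ "$ports" -eq "$active" ]; then
    echo "all host ports are logged in"
fi
```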
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n product_id_low product_id_high
0 DS4500 1742-900
1 DS4700 1814 FAStT
If you are also using a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-63 on page 737 and Figure 9-62 on page 737.
We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.
The virtual machines are located on these LUNs. To move a LUN under the control of the SVC, we do not need to reboot the whole ESX server, but we do have to stop or suspend all VMware guests that are using that LUN.
2. Next, identify all the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary Tab. The datapool used is displayed under Datastore. Figure 9-66 on page 739 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.
3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is no guest operating system running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. Once the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and remap and remask them to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-27 shows the commands we used to discover our MDisks and verify that we have the correct ones.
Example 9-27 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk output) with the serial numbers you obtained earlier (in Figure 9-62 and Figure 9-63 on page 737).

7. Once we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-28).
Example 9-28 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode VDisks with the svctask mkvdisk command (Example 9-29). The -vtype image parameter ensures that image mode VDisks are created, which means that the virtualized disks have the exact same layout as though they were not virtualized.
Example 9-29 Create the image mode VDisks
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode VDisks to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping (Example 9-30).
Example 9-30 Map the VDisks to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
10. Now, using the VMware management console, rescan to discover the new VDisks. Open the Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you might receive geometry errors as ESX discovers that the old disk has disappeared. Your VDisks appear with new vmhba device names.
11. We are now ready to restart the VMware guests again. The VMware LUNs have now been successfully migrated to the SVC.
We also need a Green zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones.

In our environment, we have:
Created three LUNs on another storage subsystem and mapped them to the SVC.
Discovered them as MDisks.
Created a new MDisk group.
Renamed these LUNs to something more meaningful.
Put all these MDisks into this group.
You can see the output of our commands in Example 9-31.

Example 9-31 Create a new MDisk group

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name capacity extent_size real_capacity overallocation
3 MDG_Nile_VM 130.0GB 512 130.00GB 100
4 MDG_ESX_VD 165.0GB 512 0.00MB 0
IBM_2145:ITSO-CLS1:admin>
If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 9-33, you can see that all the virtual capacity has now been moved from the old MDisk group (MDG_Nile_VM) to the new MDisk group (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 9-33 List MDisk group
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status capacity extent_size free_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 130.0GB 512 130.0GB 0.00MB 0 0
4 MDG_ESX_VD online 165.0GB 512 35.0GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>
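The overallocation column is simply the virtual capacity expressed as a whole percentage of the group's capacity, which is why MDG_ESX_VD reports 78 after the move: 130 GB of virtual capacity now sits in a 165 GB group. A one-line check of that arithmetic:

```shell
#!/bin/sh
virtual_gb=130   # virtual capacity now in MDG_ESX_VD
capacity_gb=165  # total capacity of MDG_ESX_VD
overallocation=$((virtual_gb * 100 / capacity_gb))
echo "overallocation: $overallocation"   # 13000 / 165, truncated to 78
```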
Our migration to the SVC is now complete. The original MDisks can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs were the last LUNs in use on our storage subsystem, we could remove it from our SAN fabric.
There are also some other preparatory activities that we can do before we need to shut down the host and reconfigure the LUN masking/mapping. This section covers those activities. In our example, we will move VDisks located on a DS4500 to image mode VDisks located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as described in Adding a new storage subsystem to SVC on page 741 and Make fabric zone changes on page 741.
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to something more meaningful, just so that they do not get confused with other MDisks being used by other activities. Also, we create the MDisk groups to hold our new MDisks. This is all shown in Example 9-35.
Example 9-35 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is now ready for the VDisk migration to image mode VDisks.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
During the migration, our ESX server will not be aware that its data is being physically moved between storage subsystems. We can continue to run and use the virtual machines running on the server. You can check the migration status with the command svcinfo lsmigrate, as shown in Example 9-37.
Example 9-37 The svcinfo lsmigrate command and output
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

Once the migration has completed, the image mode VDisks will be ready to be removed from the ESX server, and the real LUNs can be mapped/masked directly to the host using the storage subsystem's tools.
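When several migrations run in parallel, the raw svcinfo lsmigrate output becomes tedious to scan. As an aid (not part of the original procedure), the sketch below condenses each migration record to a single line with awk. The here-document stands in for real command output and mirrors the attribute layout shown in this chapter; on a live cluster you would pipe `svcinfo lsmigrate` into the same awk program.

```shell
# Condense svcinfo lsmigrate records into one line per migration.
# The here-document below is sample output in the layout used in
# this chapter; pipe the real "svcinfo lsmigrate" output instead.
summary=$(awk '
    { kv[$1] = $2 }                          # remember each attribute
    $1 == "migrate_source_vdisk_copy_id" {   # last attribute of a record
        printf "vdisk %s -> mdiskgrp %s: %s%%\n",
               kv["migrate_source_vdisk_index"],
               kv["migrate_target_mdisk_grp"],
               kv["progress"]
    }' <<'EOF'
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
EOF
)
echo "$summary"
```

The filter keys on migrate_source_vdisk_copy_id because it is the last attribute printed for each migration record, so all the fields for that record have already been collected.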
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id wwpn vdisk_UID
1 Nile 0 30 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>
2. Shut down or suspend all of our guests using the LUNs. You can use the same method used in Move VMware guest LUNs on page 738 to identify the guests using this LUN.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-39). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these VDisks are no longer mapped to the ESX server.
Example 9-39 Remove the VDisks from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD
4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make the MDisks unmanaged, as shown in Example 9-40.
Note: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty      No modified data exists in the cache.
not_empty  Some modified data might exist in the cache.
corrupt    Some modified data might have existed in the cache, but any such data has been lost.
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
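The fast_write_state check described in the note can be scripted. The sketch below extracts the attribute from svcinfo lsvdisk detail output with awk; the here-document stands in for the real command output (so the logic runs stand-alone), and the VDisk name comes from this example.

```shell
# Decide whether it is safe to run "svctask rmvdisk" by inspecting the
# fast_write_state attribute of the VDisk. The here-document stands in
# for "svcinfo lsvdisk <VDISKNAME>" output on a live cluster.
state=$(awk '$1 == "fast_write_state" { print $2; exit }' <<'EOF'
id 29
name ESX_W2k3_IVD
status online
fast_write_state empty
EOF
)
if [ "$state" = "empty" ]; then
    msg="cache destaged: safe to run svctask rmvdisk"
else
    msg="cache state is '$state': wait for destage before rmvdisk"
fi
echo "$msg"
```

On a live cluster, wrapping the check in a short sleep-and-retry loop would let a script wait out the two-minute destage window automatically.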
IBM_2145:ITSO-CLS1:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the disks from the SVC back to the ESX server. Remember that in Example 9-38 on page 747 we noted the SCSI LUN IDs. When mapping the LUNs on the storage subsystem, use the same SCSI LUN IDs that were used in the SVC.
Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all the actions that you have performed to get the server back online without data loss:
- Remap/remask the LUNs back to the SVC.
- Run svctask detectmdisk to rediscover the MDisks.
- Recreate the VDisks with svctask mkvdisk.
- Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.
6. Now, using the VMware management console, rescan to discover the new VDisk. Figure 9-68 shows the view before the rescan, and Figure 9-69 on page 750 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to a different LUN on another storage subsystem.
During the rescan, you might receive geometry errors as ESX discovers that the old disk has disappeared. Your VDisk will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then automatically removed once the SVC determines that there are no VDisks associated with them.
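The back-out sequence listed in the Important box in step 5 maps onto four CLI commands. The dry-run sketch below only prints those commands rather than executing them; the object names are the ones used in this example, and the -vtype image flag is an assumption so that the recreated VDisks would preserve the data already on the MDisks.

```shell
# Dry run of the back-out sequence: each command is printed, not executed.
# Object names come from this example; recreating the VDisks with
# "-vtype image" (an assumption) preserves the data layout on the MDisks.
backout_plan=$(cat <<'EOF'
svctask detectmdisk
svctask mkvdisk -mdiskgrp MDG_IVD_ESX -iogrp 0 -vtype image -mdisk ESX_IVD_W2K3 -name ESX_W2k3_IVD
svctask mkvdisk -mdiskgrp MDG_IVD_ESX -iogrp 0 -vtype image -mdisk ESX_IVD_SLES -name ESX_SLES_IVD
svctask mkvdiskhostmap -host Nile ESX_W2k3_IVD
svctask mkvdiskhostmap -host Nile ESX_SLES_IVD
EOF
)
echo "$backout_plan"
```

Keeping a plan like this on hand before starting the irreversible steps means the reversal can be typed (or pasted) quickly if something goes wrong.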
availability, performance, and redundancy. This step is covered in 9.8.4, Migrate image mode VDisks to VDisks on page 760. Move your AIX server's LUNs back to image mode VDisks, so that they can be remapped/remasked directly back to the AIX server. This step starts in 9.8.5, Preparing to migrate from the SVC on page 762. These three activities can be used together to migrate your AIX server's LUNs from one storage subsystem to another, using the SVC as your migration tool, or individually to introduce the SVC into, or remove it from, your environment. The only downtime required for these activities is the time it takes to remask and remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 9-70.
Figure 9-70 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The disk, hdisk3, makes up the LVM group itsoaixvg, and disk hdisk4 makes up the LVM group itsoaixvg1, as shown in Example 9-41 on page 752.
#lsdev
hdisk0 Available 16 Bit LVD SCSI Disk Drive
hdisk1 Available 16 Bit LVD SCSI Disk Drive
hdisk2 Available 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1814 DS4700 Disk Array Device
hdisk4 Available 1814 DS4700 Disk Array Device
#lspv
hdisk0 rootvg active
hdisk1 rootvg active
hdisk2 rootvg active
hdisk3 itsoaixvg active
hdisk4 itsoaixvg1 active
#
Our AIX server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem, as shown in Figure 9-70 on page 751. The AIX server's HBA cards are zoned so that they are in the Green (dotted line) zone with our storage subsystem. The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem and, using LUN masking, are directly available to our AIX server.
Attention: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and could lead to data loss! Our environment has been set up as described above and can be seen in Figure 9-71.
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7 aix_imgmdg online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC lists all the WWNs that the SVC can see on the SAN fabric that have not yet been allocated to a host. Example 9-44 shows the output of the nodes it found in our SAN fabric. (If a port does not show up, it indicates a zone configuration problem.)
Example 9-44 Add the host to the SVC
After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry. These commands can be seen in Example 9-45.
Example 9-45 Create the host entry
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500          IBM      1742-900
1  DS4700          IBM      1814
IBM_2145:ITSO-CLS2:admin>

Note: The svctask chcontroller command enables you to change the discovered storage subsystem name in the SVC. In complex SANs, it might be a good idea to rename your storage subsystems to something more meaningful.
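As the note suggests, renaming controllers pays off in complex SANs. The dry-run sketch below only prints the rename commands; the controller IDs come from the lscontroller output above, while the new names are illustrative assumptions.

```shell
# Print (not execute) svctask chcontroller rename commands.
# Controller IDs 0 and 1 come from the lscontroller output in this
# section; the new names are illustrative assumptions.
renames=""
for pair in "0=ITSO_DS4500" "1=ITSO_DS4700"; do
    id=${pair%%=*}       # controller ID before the "="
    name=${pair#*=}      # new name after the "="
    renames="${renames}svctask chcontroller -name $name $id
"
done
printf '%s' "$renames"
```

Reviewing the printed commands before pasting them into the SVC CLI avoids renaming the wrong controller in a fabric with many subsystems.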
We are now ready to move the ownership of the disks to the SVC, discover them as MDisks and give them back to the host as VDisks.
#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg
3. Using Storage Manager (our storage subsystem management tool), we unmap/unmask the disks from the AIX server and remap/remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-48 shows the commands we used to discover our MDisks and verify that we have the correct ones.
Example 9-48 Discover the new MDisks
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 mdisk24 online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
Important: Match your discovered MDisk serial numbers (the UID field in the svcinfo lsmdisk output) with the serial numbers you discovered earlier (in Figure 9-72 and Figure 9-73 on page 757).
5. Once we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-49).
Example 9-49 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype image option (Example 9-50). This command virtualizes the disks with the exact same layout as though they were not virtualized.
Example 9-50 Create the image mode VDisks
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>
7. Finally, we can map the new image mode VDisks to the host (Example 9-51).
Example 9-51 Map the VDisks to the host
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

Note: While the application is in a quiescent state, you could choose to FlashCopy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy has completed before starting your application.

Now we are ready to perform the following steps to put the image mode VDisks online:
1. Remove the old disk definitions, if you have not done so already.
2. Run cfgmgr -vs to rediscover the available LUNs.
Chapter 9. Data migration
3. If your application and data are on an LVM volume, rediscover the volume group, and then run varyonvg VOLUME_GROUP to activate the volume group.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You should be ready to start your application.
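The bring-online steps above can be kept as a short dry-run checklist. The sketch below only prints the commands: the volume group names are the ones from this example, and /MOUNT_POINT is the placeholder used in the text (your real mount points differ).

```shell
# Dry-run checklist for bringing the image mode VDisks online on AIX.
# VG names are from this example; /MOUNT_POINT is the placeholder used
# in the text, not a real path. Nothing is executed against a host.
plan=$(cat <<'EOF'
cfgmgr -vs
varyonvg itsoaixvg
varyonvg itsoaixvg1
mount /MOUNT_POINT
EOF
)
echo "$plan"
```

Having the exact ordered command list ready shortens the outage window, since the only downtime in this procedure is the remap/remask plus these few commands.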
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

Once this task has completed, Example 9-54 shows that the VDisks are now spread over three MDisks in the managed disk group aix_vd. The old MDisk group is now empty.
Example 9-54 Migration complete
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is now complete. The original MDisks can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs were the last LUNs in use on our storage subsystem, we could also remove that subsystem from our SAN fabric.
If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as shown in Figure 9-74.
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500          IBM      1742-900
1  DS4700          IBM      1814           FAStT
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend renaming them to something more meaningful, so that they do not get confused with other MDisks being used by other activities. We also create the MDisk group to hold our new MDisks. This is shown in Example 9-57 on page 764.
Example 9-57 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
SAN Volume Controller V5.1
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
6 aix_vd online 3 2 18.0GB 512 5.0GB 13.00GB 13.00GB 13.00GB 72 0
7 aix_imgmdg offline 2 0 13.0GB 512 13.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
Our SVC environment is now ready for the VDisk migration to image mode VDisks.
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
During the migration, our AIX server will not be aware that its data is being physically moved between storage subsystems. Once the migration has completed, the image mode VDisks will be ready to be removed from the AIX server, and the real LUNs can be mapped/masked directly to the host using the storage subsystems tool.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-59). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these disks are no longer mapped to the AIX server.
Example 9-59 Remove the VDisks from the host
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>
4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make the MDisks unmanaged, as shown in Example 9-60.
Note: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty      No modified data exists in the cache.
not_empty  Some modified data might exist in the cache.
corrupt    Some modified data might have existed in the cache, but any such data has been lost.
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
29 AIX_MIG online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the disks from the SVC back to the AIX server.
Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all the actions that you have performed to get the server back online without data loss:
- Remap/remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the VDisks with svctask mkvdisk.
- Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.
We are now ready to access the LUNs from the AIX server. If all the zoning and LUN masking/mapping was done correctly, our AIX server should boot as though nothing happened:
1. Run cfgmgr -S to discover the storage subsystem.
2. Use lsdev -Cc disk to verify that you discovered your new disks.
3. Remove the references to all the old disks. Example 9-61 shows the removal using SDD, and Example 9-62 on page 769 shows the removal using SDDPCM.
Example 9-61 Remove references to old paths using SDD
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN Volume Controller Device
hdisk6 Defined 1Z-08-02 SAN Volume Controller Device
hdisk7 Defined 1D-08-02 SAN Volume Controller Device
hdisk8 Defined 1D-08-02 SAN Volume Controller Device
hdisk10 Defined 1Z-08-02 SAN Volume Controller Device
hdisk11 Defined 1Z-08-02 SAN Volume Controller Device
hdisk12 Defined 1D-08-02 SAN Volume Controller Device
hdisk13 Defined 1D-08-02 SAN Volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 16 Bit LVD SCSI Disk Drive
hdisk1 Available 16 Bit LVD SCSI Disk Drive
hdisk2 Available 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1742-900 (900) Disk Array Device
hdisk4 Available 1742-900 (900) Disk Array Device
Example 9-62 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI
hdisk5 Available 1D-08-02 MPIO FC 2145
4. If your application and data are on an LVM volume, rediscover the volume group, and then run varyonvg VOLUME_GROUP to activate the volume group.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You should be ready to start your application.

Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then automatically removed once the SVC determines that there are no VDisks associated with them.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs or start the host again.
10. The migration is complete.

As you can see, very little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so the migration does not hinder the host's performance while it progresses. To use the SVC only for storage migration, perform just the steps described in the following sections:
1. 9.5.2, SVC added between the host system and the DS4700 on page 689
2. 9.5.6, Migrating the VDisk from image mode to image mode on page 705
3. 9.5.7, Free the data from the SVC on page 709
Figure 9-75 on page 771 shows the Space-Efficient VDisk zero-detect concept.
As shown in Figure 9-76 on page 772, a Space-Efficient VDisk has the following attributes:
Used capacity: Specifies the portion of the real capacity that is being used to store data. For non-space-efficient copies, this value is the same as the VDisk capacity. If the VDisk copy is space-efficient, the value grows from zero toward the real capacity value as more of the VDisk is written to.
Real capacity: The space actually allocated in the MDisk Group (MDG). In a Space-Efficient VDisk, this value can differ from the total capacity.
Free capacity: Specifies the difference between the real capacity and the used capacity. The SVC continuously tries to keep this contingency capacity available. If the used capacity approaches the real capacity and the VDisk was configured with the -autoexpand option, the SVC automatically expands the space allocated to the VDisk in order to restore the free capacity.
Grains: The smallest unit into which the allocated space can be divided.
Metadata: Allocated within the real capacity; it keeps track of the used, real, and free capacity.
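The relationships among these values can be sketched in a few lines of Python. This is an illustrative model only: the field names mirror the lsvdisk output, but the autoexpand logic here is a simplification, not the SVC's actual allocation algorithm.

```python
# Illustrative model of Space-Efficient VDisk capacity accounting.
# Field names mirror the svcinfo lsvdisk output; the autoexpand
# behavior is a simplification, not the SVC's real algorithm.

class SpaceEfficientVDisk:
    def __init__(self, capacity_mb, real_capacity_mb, autoexpand=True):
        self.capacity = capacity_mb            # virtual size presented to the host
        self.real_capacity = real_capacity_mb  # space actually allocated in the MDG
        self.used_capacity = 0.0               # portion of real capacity holding data
        self.autoexpand = autoexpand
        self.contingency = real_capacity_mb    # free space the SVC tries to preserve

    @property
    def free_capacity(self):
        return self.real_capacity - self.used_capacity

    @property
    def overallocation_pct(self):
        return int(self.capacity / self.real_capacity * 100)

    def write(self, mb):
        self.used_capacity += mb
        # Simplified autoexpand: grow the real capacity to restore the contingency.
        if self.autoexpand and self.free_capacity < self.contingency:
            self.real_capacity = min(self.capacity + self.contingency,
                                     self.used_capacity + self.contingency)


# 15 GB VDisk created with -rsize 2% (2% of 15360 MB = 307.2 MB).
vd = SpaceEfficientVDisk(capacity_mb=15 * 1024, real_capacity_mb=307.2)
vd.write(1024)  # host writes 1 GB of non-zero data
print(vd.used_capacity, round(vd.real_capacity, 1), round(vd.free_capacity, 1))
```

After the write, the model expands the real capacity so that the free (contingency) capacity is restored, which is the behavior the -autoexpand option provides.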
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
2. We then add an SEV copy with the VDisk Mirroring option, using the addvdiskcopy command with the -autoexpand parameter, as shown in Example 9-64.
Example 9-64 addvdiskcopy example
IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
Vdisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
As you can see in Example 9-64 on page 773, VD_Full now has a copy_id 1 whose used_capacity is 0.41MB, which corresponds to the metadata, because the disk contains only zeros. The real_capacity is 323.57MB, which corresponds to the -rsize 2% value specified in the addvdiskcopy command. The free_capacity is 323.17MB, which is the real capacity minus the used capacity. As long as only zeros are written to the disk, the SEV does not consume space. Example 9-65 shows that the SEV is not consuming space even when the copies are in sync.
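The arithmetic behind these numbers can be reproduced directly. In the sketch below, the real and used capacities are taken from the example output rather than computed (the metadata overhead is whatever the SVC allocated), and the small difference from the displayed overallocation value of 4746 comes from display rounding of the real capacity.

```python
# Reproduce the capacity arithmetic from Example 9-64.
virtual_mb = 15 * 1024        # 15 GB VDisk
rsize_mb = virtual_mb * 0.02  # -rsize 2% => 307.2 MB initially allocated

# Values as displayed by lsvdisk for copy 1 (rsize plus metadata):
real_mb = 323.57
used_mb = 0.41                # only metadata is used while the disk holds zeros

free_mb = real_mb - used_mb
overallocation = virtual_mb / real_mb * 100

print(round(rsize_mb, 1))     # 307.2
print(round(free_mb, 2))      # 323.16 (lsvdisk displays 323.17 due to rounding)
print(int(overallocation))    # ~4747 (lsvdisk displays 4746)
```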
Example 9-65 SEV display
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        VD_Full    0       100
2        VD_Full    1       100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
3. Now we can split the VDisk Mirror, or remove one of the copies while keeping the SEV copy as our valid copy, using the splitvdiskcopy or rmvdiskcopy command. If you need your copy as an SEV clone, we suggest that you use splitvdiskcopy, because it generates a new VDisk that you can map to any server you want. If you need your copy because you are migrating from a fully allocated VDisk to an SEV without any impact to server operations, we suggest that you use the rmvdiskcopy command. In this case, the original VDisk name is kept, and the VDisk remains mapped to the same server. Example 9-66 shows the splitvdiskcopy command.
Example 9-66 splitvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Example 9-67 shows the rmvdiskcopy command.
Example 9-67 rmvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
real capacity changes as the used capacity changes during the Metro Mirror synchronization.
1. We use a fully allocated VDisk named VD_Full and create a Metro Mirror relationship with an SEV named VD_SEV. Example 9-68 shows the two VDisks and the creation of the rcrelationship.
Example 9-68 VDisks and rcrelationship
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000F 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 2
RC_name
vdisk_UID 60050768018401BF2800000000000010
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000F
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 307.20MB
free_capacity 306.79MB
overallocation 5000
autoexpand off
warning 1
grainsize 32
IBM_2145:ITSO-CLS2:admin>svctask mkrcrelationship -cluster 0000020061006FCA -master VD_Full -aux VD_SEV -name MM_SEV_rel
RC Relationship, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsrcrelationship MM_SEV_rel
id 2
name MM_SEV_rel
master_cluster_id 0000020061006FCA
master_cluster_name ITSO-CLS2
master_vdisk_id 2
master_vdisk_name VD_Full
aux_cluster_id 0000020061006FCA
aux_cluster_name ITSO-CLS2
aux_vdisk_id 7
aux_vdisk_name VD_SEV
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro
2. Now we start the rcrelationship and observe how the space allocation on the target VDisk changes until it reaches the total used capacity. Example 9-69 shows how to start the rcrelationship and how the space allocation changes.
Example 9-69 rcrelationship and space allocation
IBM_2145:ITSO-CLS2:admin>svctask startrcrelationship MM_SEV_rel
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
...
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 3.64GB
real_capacity 3.95GB
free_capacity 312.89MB
overallocation 380
autoexpand on
warning 80
grainsize 32
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.02GB
real_capacity 15.03GB
free_capacity 11.97MB
overallocation 99
autoexpand on
warning 80
grainsize 32
3. In conclusion, it is possible to use Metro Mirror to migrate data, and we could use an SEV as the target VDisk. However, this does not make much sense, because at the end of the initial data synchronization, the SEV will have allocated as much space as the source (in our case, VD_Full). If you want to use Metro Mirror to migrate your data, we suggest using fully allocated VDisks for both the source and the target.
Appendix A.
Scripting
In this appendix, we present a high-level overview of how to automate different tasks by creating scripts using the SVC command-line interface (CLI).
Scripting structure
When creating scripts to automate tasks on the SVC, use the structure illustrated in Figure A-1.
Performing logging
When using the CLI, not all commands provide a usable response to determine the status of the invoked command. Therefore, we recommend that you always create checks that can be logged for monitoring and troubleshooting purposes.
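A minimal sketch of this logging pattern in a shell script follows. Here, echo stands in for the real plink/ssh invocation of svctask, and the log file path is hypothetical:

```shell
#!/bin/sh
# Sketch of a logged SVC CLI call: capture the output and exit status of
# every command, and timestamp each log entry for later troubleshooting.
# "echo" below is a stand-in for a real call such as:
#   plink SVC1 -l admin svctask mkvdisk ...
LOG=./svc_job.log
: > "$LOG"   # start with an empty log

run_logged() {
    OUTPUT=$("$@" 2>&1)
    STATUS=$?
    echo "$(date '+%Y-%m-%d %H:%M:%S') CMD=[$*] RC=$STATUS OUT=[$OUTPUT]" >> "$LOG"
    return $STATUS
}

run_logged echo "Virtual Disk, id [32], successfully created"
```

Because the wrapper preserves the return code, a calling script can stop on the first failure while the log keeps a complete record of what was attempted.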
To use this predefined PuTTY session, the syntax is:
plink SVC1:cluster1
If a predefined PuTTY session is not used, the syntax is:
plink admin@9.43.36.117 -i "C:\DirectoryPath\KeyName.PPK"
-------------------------------------VDiskScript.bat---------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> E:\SVC_Jobs\VDiskScript.log
-------------------------------------------------------------------------------
Figure A-3 VDiskScript.bat
Using the script, we now create a VDisk with the following parameters:
VDisk size (in GB): 4 (%1)
VDisk name: Host1_E_Drive (%2)
Managed Disk Group (MDG): 1 (%3)
This is illustrated in Example A-1.
Example: A-1 Executing the script to create the VDisk
E:\SVC_Jobs>VDiskScript 4 Host1_E_Drive 1
E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 4 -unit gb -name Host1_E_Drive -mdiskgrp 1
Virtual Disk, id [32], successfully created
E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_E_Drive' 1>>E:\SVC_Jobs\VDiskScript.log
From the output of the log, as shown in Example A-2, we verify that the VDisk was created as intended.
Example: A-2 Logfile output from VDiskScript.bat
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
32 Host1_E_Drive 0 io_grp0 online 1 MDG_DS47 4.0GB striped 60050768018301BF280000000000002E 0 1
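This verification step can itself be scripted rather than read by eye. The following is a hedged sketch: a here-document with the log line from Example A-2 stands in for the real log file, and the field positions assume the space-separated lsvdisk layout shown above.

```shell
#!/bin/sh
# Verify from the log that the VDisk was created as intended.
# The log line below is copied from Example A-2; in practice the file
# would be the VDiskScript.log written by the batch script above.
cat > VDiskScript.log <<'EOF'
32 Host1_E_Drive 0 io_grp0 online 1 MDG_DS47 4.0GB striped 60050768018301BF280000000000002E 0 1
EOF

NAME=Host1_E_Drive
LINE=$(grep " $NAME " VDiskScript.log)
STATUS=$(echo "$LINE" | awk '{print $5}')    # 5th field: status
CAPACITY=$(echo "$LINE" | awk '{print $8}')  # 8th field: capacity

if [ "$STATUS" = "online" ] && [ "$CAPACITY" = "4.0GB" ]; then
    echo "VDisk $NAME created as intended"
else
    echo "VDisk $NAME missing or in an unexpected state" >&2
    exit 1
fi
```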
SVC tree
Here is another example of using scripting to talk to the SVC. This script displays a tree-like structure for the SVC, as shown in Example A-3. The script has been written in Perl, and should work without modification using Perl on UNIX systems (such as AIX or Linux), Perl for Windows, or Perl in a Windows Cygwin environment.
Example: A-3 SVC Tree script output
$ ./svctree.pl 10.0.1.119 admin /cygdrive/c/Keys/icat.ssh
+ ITSO-CLS2 (10.0.1.119)
  + CONTROLLERS
    + DS4500 (0)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
    + DS4700 (1)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
  + MDISK GROUPS
    + MDG_0_DS45 (ID: 0 CAP: 144.0GB FREE: 120.0GB)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
    + aix_imgmdg (ID: 7 CAP: 13.0GB FREE: 3.0GB)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
  + iogrp0 (0)
    + NODES
      + Node2 (5)
      + Node1 (2)
    + HOSTS
      + W2k8 (0)
      + Senegal (1)
      + VSS_FREE (2)
        + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped STAT: online)
        + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped STAT: online)
      + VSS_RESERVED (3)
      + Kanaga (5)
        + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many STAT: online)
    + VDISKS
      + MDG_SE_VDisk3 (ID: 0 CAP: 10.2GB TYPE: many)
        + mdisk2 (ID: 10 CAP: 36.0GB MODE: managed CONT: DS4500)
        + mdisk_3 (ID: 12 CAP: 36.0GB MODE: managed CONT: DS4500)
      + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many)
        + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed CONT: DS4700)
        + Kanaga_AIX1 (ID: 24 CAP: 8.0GB MODE: managed CONT: DS4700)
      + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped)
        + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT:
        + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT:
      + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped)
        + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT:
        + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT:
  iogrp1 (1)
    + NODES
    + HOSTS
    + VDISKS
  iogrp2 (2)
    + NODES
    + HOSTS
    + VDISKS
  iogrp3 (3)
    + NODES
    + HOSTS
    + VDISKS
  recovery_io_grp (4)
    + NODES
    + HOSTS
    + VDISKS
  recovery_io_grp (4)
    + NODES
    + HOSTS
      + itsosvc1 (2200642269468)
    + VDISKS
#!/usr/bin/perl
$SSHCLIENT = "ssh"; # (plink or ssh)
$HOST = $ARGV[0];
$USER = ($ARGV[1] ? $ARGV[1] : "admin");
$PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : "/path/to/privatekey");
$DEBUG = 0;
die(sprintf("Please call script with cluster IP address. The syntax is: \n%s ipaddress loginname privatekey\n", $0)) if (! $HOST);

sub TalkToSVC() {
  my $COMMAND = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my @info;
  if ($SSHCLIENT eq "plink" || $SSHCLIENT eq "ssh") {
    $SSH = sprintf("%s -i %s %s\@%s ", $SSHCLIENT, $PRIVATEKEY, $USER, $HOST);
  } else {
    die("ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n");
  }
  if ($NODELIM) {
    $CMD = "$SSH svcinfo $COMMAND $ARGUMENT\n";
  } else {
    $CMD = "$SSH svcinfo $COMMAND -delim : $ARGUMENT\n";
  }
  print "Running $CMD" if ($DEBUG);
  open SVC, "$CMD|";
  while (<SVC>) {
    print "Got [$_]\n" if ($DEBUG);
    chomp;
    push(@info, $_);
  }
  close SVC;
  return @info;
}

sub DelimToHash() {
  my $COMMAND = shift;
  my $MULTILINE = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my %hash;
  @details = &TalkToSVC($COMMAND, $NODELIM, $ARGUMENT);
  print "$COMMAND: Got [" . join("|", @details) . "]\n" if ($DEBUG);
  my $linenum = 0;
  foreach (@details) {
    print $linenum, $_ if ($DEBUG);
    if ($linenum == 0) {
      @heading = split(":", $_);
    } else {
      @line = split(":", $_);
      $counter = 0;
      foreach $id (@heading) {
        printf("$COMMAND: ID [%s], value [%s]\n", $id, $line[$counter]) if ($DEBUG);
        if ($MULTILINE) {
          $hash{$linenum, $id} = $line[$counter++];
        } else {
          $hash{$id} = $line[$counter++];
        }
      }
    }
    $linenum++;
  }
  return %hash;
}

sub TreeLine() {
  my $indent = shift;
  my $line = shift;
  my $last = shift;
  for ($tab = 1; $tab <= $indent; $tab++) {
    print "  ";
  }
  if (! $last) {
    print "+ $line\n";
  } else {
    print "| $line\n";
  }
}

sub TreeData() {
  my $indent = shift;
  my $printline = shift;
  *data = shift;
  *list = shift;
  *condition = shift;
  my $item;
  foreach $item (sort keys %data) {
    @show = ();
    ($numitem, $detail) = split($;, $item);
    next if ($numitem == $lastnumitem);
    $lastnumitem = $numitem;
    printf("CONDITION:SRC [%s], DST [%s], DSTVAL [%s]\n", $condition{SRC}, $condition{DST}, $data{$numitem, $condition{DST}}) if ($DEBUG);
    next if (($condition{SRC} && $condition{DST}) && ($condition{SRC} != $data{$numitem, $condition{DST}}));
    foreach (@list) {
      push(@show, $data{$numitem, $_});
    }
    &TreeLine($indent, sprintf($printline, @show), 0);
  }
}

# Gather our cluster information.
%clusters = &DelimToHash('lscluster', 1);
%iogrps = &DelimToHash('lsiogrp', 1);
%nodes = &DelimToHash('lsnode', 1);
%hosts = &DelimToHash('lshost', 1);
%vdisks = &DelimToHash('lsvdisk', 1);
%mdisks = &DelimToHash('lsmdisk', 1);
%controllers = &DelimToHash('lscontroller', 1);
%mdiskgrps = &DelimToHash('lsmdiskgrp', 1);

# We are now ready to display it.
# CLUSTER
$indent = 0;
foreach $cluster (sort keys %clusters) {
  ($numcluster, $detail) = split($;, $cluster);
  next if ($numcluster == $lastnumcluster);
  $lastnumcluster = $numcluster;
  next if ("$clusters{$numcluster,'location'}" =~ /remote/);
  &TreeLine($indent, sprintf("%s (%s)", $clusters{$numcluster,'name'}, $clusters{$numcluster,'cluster_IP_address'}), 0);
  # CONTROLLERS
  &TreeLine($indentiogrp+1, "CONTROLLERS", 0);
  $lastnumcontroller = "";
  foreach $controller (sort keys %controllers) {
    $indentcontroller = $indent+2;
    ($numcontroller, $detail) = split($;, $controller);
    next if ($numcontroller == $lastnumcontroller);
    $lastnumcontroller = $numcontroller;
    &TreeLine($indentcontroller,
      sprintf("%s (%s)",
        $controllers{$numcontroller,'controller_name'},
        $controllers{$numcontroller,'id'}),
      0);
    # MDISKS
    &TreeData($indentcontroller+1,
      "%s (ID: %s CAP: %s MODE: %s)",
      *mdisks,
      ['name','id','capacity','mode'],
      {SRC=>$controllers{$numcontroller,'controller_name'}, DST=>'controller_name'});
  }
  # MDISKGRPS
  &TreeLine($indentiogrp+1, "MDISK GROUPS", 0, []);
  $lastnummdiskgrp = "";
  foreach $mdiskgrp (sort keys %mdiskgrps) {
    $indentmdiskgrp = $indent+2;
    ($nummdiskgrp, $detail) = split($;, $mdiskgrp);
    next if ($nummdiskgrp == $lastnummdiskgrp);
    $lastnummdiskgrp = $nummdiskgrp;
    &TreeLine($indentmdiskgrp,
      sprintf("%s (ID: %s CAP: %s FREE: %s)",
        $mdiskgrps{$nummdiskgrp,'name'},
        $mdiskgrps{$nummdiskgrp,'id'},
        $mdiskgrps{$nummdiskgrp,'capacity'},
        $mdiskgrps{$nummdiskgrp,'free_capacity'}),
      0);
    # MDISKS
    &TreeData($indentcontroller+1,
      "%s (ID: %s CAP: %s MODE: %s)",
      *mdisks,
      ['name','id','capacity','mode'],
      {SRC=>$mdiskgrps{$nummdiskgrp,'id'}, DST=>'mdisk_grp_id'});
  }
  # IOGROUP
  $lastnumiogrp = "";
  foreach $iogrp (sort keys %iogrps) {
    $indentiogrp = $indent+1;
    ($numiogrp, $detail) = split($;, $iogrp);
    next if ($numiogrp == $lastnumiogrp);
    $lastnumiogrp = $numiogrp;
    &TreeLine($indentiogrp, sprintf("%s (%s)", $iogrps{$numiogrp,'name'}, $iogrps{$numiogrp,'id'}), 0);
    $indentiogrp++;
    # NODES
    &TreeLine($indentiogrp, "NODES", 0);
    &TreeData($indentiogrp+1,
      "%s (%s)",
      *nodes,
      ['name','id'],
      {SRC=>$iogrps{$numiogrp,'id'}, DST=>'IO_group_id'});
    # HOSTS
    &TreeLine($indentiogrp, "HOSTS", 0);
    $lastnumhost = "";
    %iogrphosts = &DelimToHash('lsiogrphost', 1, 0, $iogrps{$numiogrp,'id'});
    foreach $host (sort keys %iogrphosts) {
      my $indenthost = $indentiogrp+1;
      ($numhost, $detail) = split($;, $host);
      next if ($numhost == $lastnumhost);
      $lastnumhost = $numhost;
      &TreeLine($indenthost,
        sprintf("%s (%s)", $iogrphosts{$numhost,'name'}, $iogrphosts{$numhost,'id'}),
        0);
      # HOSTVDISKMAP
      %vdiskhostmap = &DelimToHash('lshostvdiskmap', 1, 0, $hosts{$numhost,'id'});
      $lastnumvdisk = "";
      foreach $vdiskhost (sort keys %vdiskhostmap) {
        ($numvdisk, $detail) = split($;, $vdiskhost);
        next if ($numvdisk == $lastnumvdisk);
        $lastnumvdisk = $numvdisk;
        next if ($vdisks{$numvdisk,'IO_group_id'} != $iogrps{$numiogrp,'id'});
        &TreeData($indenthost+1,
          "%s (ID: %s CAP: %s TYPE: %s STAT: %s)",
          *vdisks,
          ['name','id','capacity','type','status'],
          {SRC=>$vdiskhostmap{$numvdisk,'vdisk_id'}, DST=>'id'});
      }
    }
    # VDISKS
    &TreeLine($indentiogrp, "VDISKS", 0);
    $lastnumvdisk = "";
    foreach $vdisk (sort keys %vdisks) {
      my $indentvdisk = $indentiogrp+1;
      ($numvdisk, $detail) = split($;, $vdisk);
      next if ($numvdisk == $lastnumvdisk);
      $lastnumvdisk = $numvdisk;
      &TreeLine($indentvdisk,
        sprintf("%s (ID: %s CAP: %s TYPE: %s)",
          $vdisks{$numvdisk,'name'},
          $vdisks{$numvdisk,'id'},
          $vdisks{$numvdisk,'capacity'},
          $vdisks{$numvdisk,'type'}),
        0) if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});
      # VDISKMEMBERS
      if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
        %vdiskmembers = &DelimToHash('lsvdiskmember', 1, 1, $vdisks{$numvdisk,'id'});
        foreach $vdiskmember (sort keys %vdiskmembers) {
          &TreeData($indentvdisk+1,
            "%s (ID: %s CAP: %s MODE: %s CONT: %s)",
            *mdisks,
            ['name','id','capacity','mode','controller_name'],
            {SRC=>$vdiskmembers{$vdiskmember}, DST=>'id'});
        }
      }
    }
  }
}
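The core trick in the script above, asking svcinfo for -delim : output and turning the header line plus each data line into a keyed record, translates naturally to other languages. The following Python sketch mirrors the DelimToHash idea; the sample text is hypothetical captured output standing in for a live ssh/plink call.

```python
# Python equivalent of the script's DelimToHash: parse "svcinfo ... -delim :"
# output into one dictionary per data row. The sample below is hypothetical
# captured output, standing in for a live ssh/plink invocation.

def delim_to_rows(output, delim=":"):
    lines = [ln for ln in output.strip().splitlines() if ln]
    heading = lines[0].split(delim)            # first line holds the field names
    return [dict(zip(heading, ln.split(delim))) for ln in lines[1:]]


sample = """id:name:IO_group_id:status:capacity
2:VD_Full:0:online:15.00GB
7:VD_SEV:0:online:15.00GB"""

rows = delim_to_rows(sample)
print(rows[0]["name"], rows[1]["capacity"])  # VD_Full 15.00GB
```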
Scripting alternatives
For an alternative to scripting, visit the Tivoli Storage Manager for Advanced Copy Services product page:
http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/
Additionally, IBM provides a suite of scripting tools based on Perl, which can be downloaded from:
http://www.alphaworks.ibm.com/tech/svctools
Appendix B.
Node replacement
In this appendix, we discuss the process to replace nodes. For the latest information about replacing a node, refer to the development page at one of the following sites:
IBMers:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437
Business Partners (login required):
http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD104437
Clients:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437
e. Issue the following command from the CLI for each node_name or node_id to determine the front_panel_id for each node, and record the ID. The front_panel_id is physically located on the front of every node (it is not the serial number), and you can use it to determine which physical node equates to the node_name or node_id that you plan to replace:
svcinfo lsnodevpd node_name or node_id
2. Perform the following steps to record the WWNN of the node that you want to replace:
a. Issue the following command from the CLI, where node_name or node_id is the name or ID of the node for which you want to determine the WWNN:
svcinfo lsnode -delim : node_name or node_id
b. Record the WWNN of the node that you want to replace.
3. Verify that all VDisks, MDisks, and disk controllers are online and that none are in a Degraded state. If any are, resolve the issue before going forward, or loss of access to data may occur when you perform step 4. This step is especially important if this is the second node in the I/O group to be replaced.
a. Issue the following commands from the CLI, where object_id or object_name is the controller ID or controller name that you want to view. Verify that each disk controller shows its status as degraded no:
svcinfo lsvdisk -filtervalue status=degraded
svcinfo lsmdisk -filtervalue status=degraded
svcinfo lscontroller object_id or object_name
4. Issue the following CLI command to shut down the node that will be replaced, where node_name or node_id is the name or ID of the node that you want to delete:
svctask stopcluster -node node_name or node_id
Attention: Do not power off the node through the front panel in lieu of using the above command. Be careful that you do not issue the stopcluster command without the -node node_name or node_id parameter, because the entire cluster will shut down if you do.
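The degraded-object check in step 3 lends itself to scripting. The following is a hedged sketch: the svcinfo shell function below is a mock with canned (empty) output so the logic is runnable here; against a real cluster, a plink/ssh invocation such as `plink <cluster_ip> -l admin svcinfo ...` would take its place.

```shell
#!/bin/sh
# Sketch of the pre-shutdown safety check from step 3: refuse to proceed
# with the node shutdown if any VDisk or MDisk reports a degraded status.
# svcinfo here is a mock; an empty result means no degraded objects.
svcinfo() {
    printf ""
}

DEGRADED=0
for cmd in lsvdisk lsmdisk; do
    if [ -n "$(svcinfo $cmd -filtervalue status=degraded)" ]; then
        echo "Degraded objects reported by $cmd; resolve before replacing the node" >&2
        DEGRADED=1
    fi
done

if [ "$DEGRADED" -eq 0 ]; then
    echo "No degraded VDisks or MDisks: safe to run svctask stopcluster -node"
fi
```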
Issue the following CLI command to ensure that the node is shut down and the status is offline, where node_name or node_id is the name or ID of the original node. The node status should be offline:
svcinfo lsnode node_name or node_id
5. Issue the following CLI command to delete this node from the cluster and I/O group, where node_name or node_id is the name or ID of the node that you want to delete:
svctask rmnode node_name or node_id
6. Issue the following CLI command to ensure that the node is no longer a member of the cluster, where node_name or node_id is the name or ID of the original node. The node should not be listed in the command output:
svcinfo lsnode node_name or node_id
7. Perform the following steps to change the WWNN of the node that you just deleted to FFFFF:
Attention: Record and mark the Fibre Channel cables with the SVC node port number (1-4) before removing them from the back of the node being replaced. You must reconnect the cables on the new node exactly as they were on the old node. Looking at the back of the node, the Fibre Channel ports on the SVC nodes are numbered 1-4 from left to right and must be reconnected in the same order, or the port IDs will change, which could impact hosts' access to VDisks or cause problems with adding the new node back into the cluster. The SVC Hardware Installation Guide shows the port numbering of the various node models.
If you do not disconnect the fibre cables now, SAN devices and SAN management software will likely discover the new WWPNs generated when the WWNN is changed to FFFFF in the following steps. This may cause ghost records to be seen after the node is powered down. These records do not necessarily cause a problem, but they may require a reboot of a SAN device to clear. In addition, they may interfere with the correct functioning of AIX dynamic tracking, if it is enabled, so we highly recommend disconnecting the node's fibre cables as instructed in step a below before continuing with any other steps.
a. Disconnect the four Fibre Channel cables from this node before powering the node on in the next step.
b. Power on this node using the power button on the front panel and wait for it to boot before going to the next step.
c. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
d. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed.
e. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode.
The first character of the WWNN is highlighted.
f. Press the up or down button to increment or decrement the character that is displayed.
Note: The characters wrap F to 0 or 0 to F.
g. Press the left navigation button to move to the next field or the right navigation button to return to the previous field, and repeat step f for each field. At the end of this step, the characters that are displayed must be FFFFF.
h. Press the select button to retain the characters that you have updated and return to the WWNN screen.
i. Press the select button again to apply the characters as the new WWNN for the node.
Note: You must press the select button twice, as steps h and i instruct you to do. After step h, it may appear that the WWNN has been changed, but step i actually applies the change.
8. Power off this node using the power button on the front panel and remove the node from the rack if desired.
9. Install the replacement node and its UPS in the rack and connect the node-to-UPS cables according to the SVC Hardware Installation Guide, available at:
http://www.ibm.com/storage/support/2145
Note: Do not connect the Fibre Channel cables to the new node during this step.
10. Power on the replacement node from the front panel with the Fibre Channel cables disconnected. Once the node has booted, ensure that the node displays Cluster: on the front panel and nothing else. If something other than this is displayed, contact IBM support for assistance before continuing.
11. Record the WWNN of this new node, because you will need it if you plan to redeploy the old nodes being replaced. Perform the following steps to change the WWNN of the replacement node to match the WWNN that you recorded in step 2 on page 799:
a. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
b. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed. Record this number for use in redeployment of the old nodes.
c. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
e. Press the left navigation button to move to the next field or the right navigation button to return to the previous field, and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN that you recorded in step 2 on page 799.
f. Press the select button to retain the characters that you have updated and return to the WWNN panel.
g. Press the select button to apply the characters as the new WWNN for the node.
Note: You must press the select button twice, as steps f and g instruct. After step f, the WWNN may appear to have changed, but step g is what actually applies the change.
h. The node should display Cluster: on the front panel and is now ready to be added to the cluster. If anything other than this is displayed, contact IBM Support for assistance before continuing.
12. Connect the Fibre Channel cables to the same port numbers on the new node as they were originally on the old node. See step 7 on page 800.
Note: Do not connect the new nodes to different ports at the switch or director. Doing so changes the port IDs, which can affect the hosts' access to VDisks or cause problems with adding the new node back into the cluster. The new nodes have 4 Gbps HBAs, and it is tempting to move them to 4 Gbps switch/director ports at the same time, but this is not recommended during the hardware node upgrade. Moving the node cables to faster ports on the switch/director is a separate process that must be planned independently of upgrading the nodes in the cluster.
801
13. Issue the following CLI command to verify that the last five characters of the WWNN are correct:
svcinfo lsnodecandidate
Note: If the WWNN does not exactly match the original node's WWNN as recorded in step 2 on page 799, you must repeat step 11 on page 801.
14. Add the node to the cluster and ensure that it is added back to the same I/O group as the original node, using the following command, where wwnn_arg and iogroup_name or iogroup_id are the items that you recorded in steps 1 on page 798 and 2 on page 799:
svctask addnode -wwnodename wwnn_arg -iogrp iogroup_name or iogroup_id
15. Verify that all the VDisks for this I/O group are back online and are no longer degraded. If the node replacement is being done disruptively, so that no I/O is occurring to the I/O group, you still need to wait (we recommend 30 minutes in this case too) to make sure the new node is back online and available to take over before you replace the next node in the I/O group. See step 3 on page 799.
Both nodes in the I/O group cache data; however, the cache sizes are asymmetric if the remaining partner node in the I/O group is a SAN Volume Controller 2145-4F2 node. In this case, the replacement node is limited by the cache size of the partner node in the I/O group. Therefore, the replacement node does not utilize its full 8 GB cache size until the other 2145-4F2 node in the I/O group is also replaced.
You do not have to reconfigure the host multipathing device drivers, because the replacement node uses the same WWNN and WWPNs as the previous node. The multipathing device drivers should detect the recovery of the paths that are available through the replacement node. The host multipathing device drivers take approximately 30 minutes to recover the paths. Therefore, do not upgrade the other node in the I/O group for at least 30 minutes after successfully upgrading the first node in the I/O group.
If you have nodes in other I/O groups to upgrade, you can perform those upgrades while you wait the 30 minutes noted above.
16. Repeat steps 2 on page 799 to 15 for each node that you want to replace.
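The checks in steps 13 and 14 amount to comparing the last five characters of the candidate node's WWNN against the recorded one and, only on a match, issuing the addnode command. A minimal sketch with hypothetical helper names; the CLI syntax is the one quoted in step 14:

```python
# Sanity check before re-adding a replacement node: the front panel and
# svcinfo lsnodecandidate expose the WWNN, and the last five characters
# must match the WWNN recorded from the original node.
def wwnn_matches(candidate_wwnn: str, recorded_wwnn: str) -> bool:
    """Compare the last five hex characters, case-insensitively."""
    return candidate_wwnn[-5:].upper() == recorded_wwnn[-5:].upper()

def build_addnode_cmd(wwnn: str, iogrp: str) -> str:
    """Compose the CLI command used to add the node back to its I/O group."""
    return f"svctask addnode -wwnodename {wwnn} -iogrp {iogrp}"
```

If the comparison fails, the procedure says to repeat step 11 (re-edit the WWNN from the front panel) rather than proceed.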
802
Download, install, and run the latest SVC Software Upgrade Test Utility, available at:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure.
Perform the following steps to add nodes to an existing cluster:
1. Depending on the model of node being added, it may be necessary to upgrade the existing SVC cluster software to a level that supports the hardware model:
The model 2145-8G4 requires Version 4.2.x or later.
The model 2145-8F4 requires Version 4.1.x or later.
The model 2145-8F2 requires Version 3.1.x or later.
The 2145-4F2 is the original model and is supported by Version 1 through Version 4.
It is highly recommended that the existing cluster be upgraded to the latest level of SVC software available; however, the minimum level of SVC cluster software recommended for the 4F2 is Version 3.1.0.5.
2. Install the additional nodes and UPSs in a rack. Do not connect them to the SAN at this time.
3. Ensure that each node being added has a unique WWNN. Duplicate WWNNs can cause serious problems on a SAN and must be avoided. Here is an example of how duplicates could occur: the nodes came from cluster ABC, where they were replaced by brand new nodes. The procedure to replace the nodes in cluster ABC required each brand new node's WWNN to be changed to the old node's WWNN. Adding the old nodes to the same SAN now would cause duplicate WWNNs to appear, with unpredictable results. Power up each node separately while it is disconnected from the SAN and use the front panel to view the current WWNN. If necessary, change it to something unique on the SAN. If required, contact IBM Support for assistance before continuing.
4. Power up the additional UPSs and nodes. Do not connect them to the SAN at this time.
5. Ensure that each node displays Cluster: on the front panel and nothing else. If anything other than this is displayed, contact IBM Support for assistance before continuing.
6.
Connect the additional nodes to the LAN.
7. Connect the additional nodes to the SAN fabric(s).
Attention: Do not add the additional nodes to the existing cluster before the zoning and masking steps below are completed, or SVC will enter a degraded mode and log errors with unpredictable results.
8. Zone the additional node ports into the existing SVC-only zone(s). There should be an SVC zone in each fabric containing nothing but the ports of the SVC nodes. These zones are needed for the initial formation of the cluster, because the nodes need to see each other to form a cluster. This zone may not exist, in which case the only way the SVC nodes see each other is through a storage zone that includes all the node ports. However, it is highly recommended to have a separate zone in each fabric with just the SVC node ports in it, to avoid the possibility of the nodes losing communication with each other if the storage zone(s) are changed or deleted.
9. Zone the new node ports into the existing SVC/storage zone(s). There should be an SVC/storage zone in each fabric for each disk subsystem used with SVC. Each zone should contain all the SVC ports in that fabric along with all the disk subsystem ports in that fabric that SVC will use to access the physical disks.
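The model-to-minimum-version rule from step 1 of this procedure can be expressed as a small compatibility check. This is an illustrative sketch, not an IBM utility; the mapping is taken from the list in step 1, and the 2145-4F2 entry simply encodes "supported from Version 1":

```python
# Minimum SVC cluster software (major, minor) per node hardware model,
# as listed in step 1 of the add-nodes procedure.
MIN_VERSION = {
    "2145-8G4": (4, 2),
    "2145-8F4": (4, 1),
    "2145-8F2": (3, 1),
    "2145-4F2": (1, 0),
}

def cluster_supports(model: str, cluster_version: str) -> bool:
    """True if the running cluster version meets the model's minimum."""
    major, minor = (int(p) for p in cluster_version.split(".")[:2])
    return (major, minor) >= MIN_VERSION[model]
```

So a cluster at Version 4.1.x could accept a 2145-8F4 node but would need upgrading before a 2145-8G4 node could be added.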
803
Note: There are exceptions when EMC DMX/Symmetrix or HDS storage is involved. For further information, review the SVC Software Installation and Configuration Guide, available at:
http://www.ibm.com/storage/support/2145
10. On each disk subsystem seen by the SVC, use its management interface to map the LUNs that are currently used by SVC to all the new WWPNs of the new nodes that will be added to the SVC cluster. This is a critical step: the new nodes must see the same LUNs that the existing SVC cluster nodes see before the new nodes are added to the cluster; otherwise, problems may arise. Also note that all SVC ports zoned with the back-end storage must see all the LUNs presented to SVC through all of those same storage ports, or SVC will mark the devices as degraded.
11. When all of the above is done, you can add the additional nodes to the cluster using the SVC GUI or CLI. The cluster should not mark anything degraded, because the new nodes will see the same cluster configuration, the same storage zoning, and the same LUNs as the existing nodes.
12. Check the status of the controller(s) and MDisks to ensure that nothing is marked degraded. If anything is, then something is not configured properly, and this must be addressed immediately before doing anything else to the cluster. If you cannot determine fairly quickly what is wrong, remove the newly added nodes from the cluster until the problem is resolved. You can contact IBM Support for assistance.
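The requirement in step 10, that every SVC port zoned with the back-end storage sees the identical LUN set, can be sketched as a simple consistency check. The input shape (port name mapped to a set of LUN IDs) is an assumption for illustration, standing in for whatever your disk subsystem's management interface reports:

```python
# If any port reports a different LUN set than the others, SVC would
# mark the affected devices degraded, so this must be fixed before the
# new nodes are added to the cluster.
def luns_consistent(luns_by_port: dict) -> bool:
    """True if all ports report an identical set of LUN IDs."""
    lun_sets = list(luns_by_port.values())
    return all(s == lun_sets[0] for s in lun_sets[1:])
```

A mismatch points at the masking done in step 10: some new-node WWPN was not given the same LUN mappings as the existing nodes.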
804
4. If you shut down your host, start it again.
5. From each host, issue a rescan of the multipathing software to discover the new paths to the VDisks.
6. See the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered.
7. Vary on your file system.
8. Restart the host I/O.
9. Repeat steps 1 to 8 for each VDisk in the cluster that you want to replace.
805
5. Install the replacement (new) node in the rack and connect the uninterruptible power supply (UPS) cables and the Fibre Channel cables.
6. Power on the node.
7. Rezone your switch zones to remove the ports of the node that you are replacing from the host and storage zones. Replace these ports with the ports of the replacement node.
8. Add the replacement node to the cluster and I/O group.
Important: Both nodes in the I/O group cache data; however, the cache sizes are asymmetric. The replacement node is limited by the cache size of the partner node in the I/O group. Therefore, the replacement node does not utilize the full size of its cache.
9. From each host, issue a rescan of the multipathing software to discover the new paths to the VDisks.
Note: If your system is inactive, you can perform this step after you have replaced all the nodes in the cluster. The host multipathing device drivers take approximately 30 minutes to recover the paths.
10. Refer to the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered before proceeding to the next step.
11. Repeat steps 1 to 10 for the partner node in the I/O group.
Note: After you have upgraded both nodes in the I/O group, the cache sizes are symmetric and the full 8 GB of cache is utilized.
12. Repeat steps 1 to 11 for each node in the cluster that you want to replace.
13. Resume host I/O.
806
6423axc.fm
Appendix C.
807
Performance considerations
When discussing the performance of a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of that system. At the same time, you must consider the workload for which you identify a limiting factor, because a different workload may be limited by a different component. When designing a storage infrastructure using SVC, or operating an SVC storage infrastructure, you must therefore take into consideration the performance and capacity of your infrastructure; hence, monitoring your SVC environment is key to achieving and sustaining the required performance.
SVC
The SVC cluster is scalable up to eight nodes, and performance grows almost linearly as nodes are added to the cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not remove the need for a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, thereby creating a greater level of concurrent I/O to the back end without overloading a single disk or array. In the following sections, we discuss the performance of the SVC and assume that there are no bottlenecks in the SAN or on the disk subsystems.
Performance monitoring
In this section, we discuss some performance monitoring techniques.
808
Statistics gathering is enabled or disabled on a per-cluster basis. When gathering is enabled, all nodes in the cluster gather statistics. SVC supports sampling periods of 1 to 60 minutes, in steps of one minute. Previous versions of SVC provided per-cluster statistics. These were later superseded by per-node statistics, which provide a greater range of information. From SVC 5.1.0 onwards, only per-node statistics are generated; per-cluster statistics are no longer available, and customers should use per-node statistics instead.
Tip: You can use pscp.exe, which is installed with PuTTY, from an MS-DOS command line prompt to copy these files to a local drive; WordPad can be used to open them. For example:
C:\Program Files\PuTTY>pscp -load ITSO-CLS1 admin@10.64.210.242:/dumps/iostats/* c:\temp\iostats
The -load parameter specifies a session defined in PuTTY. After you have saved your performance statistics data files, you can format and merge the data (the files are in XML format) to get more detail about the performance of your SVC environment.
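Once the XML statistics files are on a local drive, they can be post-processed programmatically rather than by hand. The sketch below sums read and write byte counters across VDisk entries; the element and attribute names (`vdisk`, `rb`, `wb`) are invented for illustration, because the real SVC iostats schema differs, so adapt the tag names to the files you actually collect:

```python
import xml.etree.ElementTree as ET

# A synthetic stand-in for one per-node statistics file; the real files
# copied from /dumps/iostats/ use SVC's own schema.
SAMPLE = """<stats>
  <vdisk id="vdisk0" rb="4096" wb="8192"/>
  <vdisk id="vdisk1" rb="1024" wb="2048"/>
</stats>"""

def total_bytes(xml_text: str) -> dict:
    """Sum read/write byte counters across all VDisk entries."""
    root = ET.fromstring(xml_text)
    totals = {"read": 0, "write": 0}
    for v in root.iter("vdisk"):
        totals["read"] += int(v.get("rb"))
        totals["write"] += int(v.get("wb"))
    return totals
```

The same approach extends to merging several sampling intervals into one series, which is what a spreadsheet or the reporting tools mentioned below do for you.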
809
An example of an unsupported tool, provided as-is, is the SVC Performance Monitor (svcmon); its Users Guide is found at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177
You can also process your statistics data with a spreadsheet application to produce a report such as the one shown in Figure 9-77 on page 810.
810
More information about how to create a performance report is provided in IBM TPC Reporter for Disk (a utility for anyone running IBM's TotalStorage Productivity Center), available at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2618
TPC Reporter for Disk is being withdrawn with Tivoli Storage Productivity Center (TPC) Version 4.1. The replacement function for this utility is packaged with TPC Version 4.1 in Business Intelligence and Reporting Tools (BIRT).
811
812
6423bibl.fm
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see How to get Redbooks on page 814. Note that some of the documents referenced here may be available in softcopy only.
IBM System Storage SAN Volume Controller, SG24-6423-05
Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
IBM System Storage: Implementing an IBM SAN, SG24-6116
Introduction to Storage Area Networks, SG24-5470
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
Using the SVC for Business Continuity, SG24-7371
IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
Other resources
These publications are also relevant as further information sources:
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
IBM System Storage Master Console: Installation and User's Guide, GC30-4090
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
813
815
816
6423IX.fm
Index
Symbols
) 56 available managed disks 341
B
back-end application 59 background copy 299, 306, 319, 326 background copy bandwidth 331 background copy progress 419, 441 background copy rate 274275 backup 254 of data with minimal impact on production 260 backup speed 255 backup time 254 bandwidth 64, 92, 318, 585, 610 bandwidth impact 331 basic setup requirements 129 bat script 785 bind 251 bitmaps 259 boot 95 boss node 35 bottleneck 49 bottlenecks 9798, 808 budget 26 budget allowance 26 business requirements 97, 808
Numerics
64-bit kernel 56
A
abends 464 abends dump 464 access pattern 517 active quorum disk 36 active SVC cluster 562 add a new volume 168, 173 add a node 388 add additional ports 501 add an HBA 352 Add SSH Public Key 128 administration tasks 493, 558 Advanced Copy Services 90 AIX host system 182 AIX specific information 162 AIX toolbox 182 AIX-based hosts 162 alias 27 alias string 158 aliases 27 analysis 97, 655, 808 application server guidelines 89 application testing 255 assign VDisks 366 assigned VDisk 168, 173 asynchronous 308 asynchronous notifications 277278 Asynchronous Peer-to-Peer Remote Copy 308 asynchronous remote 308 asynchronous remote copy 32, 281, 308309 asynchronous replication 329 asynchronously 308 attributes 527 audit log 40 Authentication 160 authentication 41, 57, 131 authentication service 44 Autoexpand 25 automate tasks 784 automatic Linux system 224 automatic update process 225 automatically discover 340 automatically formatted 52 automatically restarted 645 automation 375 auxiliary 318, 326, 424, 446 auxiliary VDisk 309, 319, 326
C
cable connections 71 cable length 48 cache 38, 267, 309 caching 98 caching capability 97, 808 candidate node 389 capacity 87, 181 capacity information 538 capacity measurement 507 CDB 27 challenge message 30 Challenge-Handshake Authentication Protocol 30 change the IP addresses 383 Channel extender 58 channel extender 61 channels 315 CHAP 30 CHAP authentication 30, 160 CHAP secret 30, 160 check software levels 636 chpartnership 332 chrcconsistgrp 333 chrcrelationship 333 chunks 85, 683 CIM agent 39 CIM Client 38 CIMOM 28, 38, 124, 159
817
CLI 124, 433 commands 182 scripting for SVC task automation 375 Cluster 58 cluster 34 adding nodes 559 creation 388, 559 error log 458 IP address 112 shutting down 340, 386, 395, 550 time zone 383384 viewing properties 379, 544 cluster error log 655 Cluster management 38 cluster nodes 34 cluster overview 34 cluster partnership 289, 315 cluster properties 385 clustered ethernet port 160 clustered server resources 34 clusters 64 colliding rites 311 Colliding writes 310 Command Descriptor Block 27 command syntax 377 COMPASS architecture 46 compression 95 concepts 7 concurrent instances 682 concurrent software upgrade 449 configurable warning capacity 25 configuration 153 restoring 672 configuration node 35, 48, 58, 160, 388, 559 configuration rules 52 configure AIX 163 configure SDD 251 configuring the GUI 114 connected 293, 320321 connected state 296, 321, 323 connectivity 37 consistency 281, 310, 322 consistency freeze 296, 304, 323 Consistency Group 58 consistency group 260, 262263 limits 263 consistent 32, 294295, 321322 consistent data set 254 Consistent Stopped state 292, 319 Consistent Synchronized state 292, 320, 599, 626 ConsistentDisconnected 298, 325 ConsistentStopped 296, 323 ConsistentSynchronized 297, 324 constrained link 318 container 85 contingency capacity 25 controller, renaming 339 conventional storage 675 cookie crumbs recovery 467 cooling 65
Copied 58 copy bandwidth 92, 331 copy operation 33 copy process 304, 333 copy rate 265, 275 copy rate parameter 90 copy service 308 Copy Services managing 396, 567 COPY_COMPLETED 277 copying state 402 corruption 255 Counterpart SAN 58 counterpart SAN 58, 61, 99 CPU cycle 49 create a FlashCopy 398 create a new VDisk 505 create an SVC partnership 585, 609 create mapping command 398, 567, 569 create New Cluster 117 create SVC partnership 414, 435 creating a VDisk 354 creating managed disk groups 483 credential caching 45 current cluster state 36 Cygwin 213
D
data backup with minimal impact on production 260 moving and migration 254 data change rates 95 data consistency 308, 397 data corruption 322 data flow 74 data migration 65, 682 data migration and moving 254 data mining 255 data mover appliance 369 database log 313 database update 312 degraded mode 84 delete a FlashCopy 406 a host 352 a host port 354 a port 502 a VDisk 364365, 513, 540 ports 353 Delete consistency group command 406, 579 Delete mapping command 578 dependent writes 262, 286287, 312, 314 destaged 38 destructive 460 detect the new MDisks 340 detected 340 device specific modules 188 differentiator 50 directory protocol 44 dirty bit 299, 326
818
disconnected 293, 320321 disconnected state 321 discovering assigned VDisk 168, 173, 190 discovering newly assigned MDisks 481 disk access profile 363 disk controller renaming 478 systems 338 viewing details 338, 477 disk internal controllers 50 disk timeout value 245 disk zone 74 Diskpart 195 display summary information 341 displaying managed disks 490 distance 60, 279 distance limitations 280 DMP 248 documentation 64, 475 DSMs 188 dump I/O statistics 462 I/O trace 462 listing 460, 663 other nodes 463 durability 50 dynamic pathing 248249 dynamic shrinking 536 dynamic tracking 164
excludes 481 Execute Metro Mirror 418, 440 expand a VDisk 178, 194, 365 a volume 195 expand a space-efficient VDisk 365 expiry timestamp 44 expiry timestamps 45 extended distance solutions 279 Extent 59 extent 85, 676 extent level 676 extent sizes 85
F
fabric remote 99 fabric interconnect 60 factory WWNN 798 failover 59, 248, 309 failover only 228 failover situation 280 fan-in 59 fast fail 164 fast restore 255 FAStT 279 FC optical distance 48 feature log 662 feature, licensing 659 features, licensing 460 featurization log 461 Featurization Settings 121 Fibre Channel interfaces 47 Fibre Channel port fan in 61, 99 Fibre Channel Port Login 28 Fibre Channel port logins 59 Fibre Channel ports 71 file system 231 filtering 377, 470 filters 377 fixed error 459, 655 FlashCopy 33, 254 bitmap 264 how it works 255, 259 image mode disk 268 indirection layer 263 mapping 255 mapping events 269 rules 268 serialization of I/O 276 synthesis 275 FlashCopy indirection layer 263 FlashCopy mapping 260, 269 FlashCopy mapping states 271 Copying 272 Idling/Copied 272 Prepared 273 Preparing 272 Stopped 272 Suspended 272 Index
E
elapsed time 90 empty MDG 343 empty state 299, 326 Enterprise Storage Server (ESS) 279 entire VDisk 260 error 296, 320, 323, 343, 458, 655 Error Code 58, 646 error handling 276 Error ID 58 error log 458, 655 analyzing 655 file 645 error notification 457, 647 error number 646 error priority 656 ESS 44 ESS (Enterprise Storage Server) 279 ESS server 44 ESS to SVC 687 ESS token 44 eth0 48 eth0 port 48 eth1 48 Ethernet 71 Ethernet connection 72 event 458, 655 event log 461 events 291, 319 Excluded 59
819
FlashCopy mappings 263 FlashCopy properties 263 FlashCopy rate 90 flexibility 97, 808 flush the cache 573 forced deletion 501 foreground I/O latency 331 format 506, 510, 515, 522 free extents 364 front-end application 59 FRU 59 Full Feature Phase 27
host adapter configuration settings 184 host bus adapter 348 Host ID 59 host workload 526 housekeeping 476, 543 HP-UX support information 248249
I
I/O budget 26 I/O Governing 26 I/O governing 26, 363, 517 I/O governing rate 363 I/O Group 60 I/O group 37, 60 name 473 renaming 391, 558 viewing details 391 I/O pair 67 I/O per secs 64 I/O statistics dump 462 I/O trace dump 462 ICAT 3839 identical data 318 idling 297, 324 idling state 304, 334 IdlingDisconnected 297, 325 Image Mode 60 image mode 526, 685 image mode disk 268 image mode MDisk 685 image mode to image mode 705 image mode to managed mode 700 image mode VDisk 680 image mode virtual disks 88 inappropriate zoning 82 inconsistent 294, 321 Inconsistent Copying state 292, 320 Inconsistent Stopped state 292, 319, 598599, 626 InconsistentCopying 296, 323 InconsistentDisconnected 298, 325 InconsistentStopped 296, 323 index number 666 Index/Secret/Challenge 30 indirection layer 263 indirection layer algorithm 265 informational error logs 277 initiator 158 initiator name 27 input power 386 install 63 insufficient bandwidth 275 integrity 262, 287, 313 interaction with the cache 267 intercluster communication and zoning 315 intercluster link 289, 316 intercluster link bandwidth 331 intercluster link maintenance 289290, 316 intercluster Metro Mirror 279, 308 intercluster zoning 289290, 316 Internet Storage Name Service 30, 59, 159
G
gateway IP address 112 GBICs 60 general housekeeping 476, 543 generating output 378 generator 127 geographically dispersed 279 Global Mirror guidelines 93 Global Mirror protocol 32 Global Mirror relationship 312 Global Mirror remote copy technique 308 GM 308 gminterdelaysimulation 328 gmintradelaysimulation 328 gmlinktolerance 328329 governing 26 governing rate 26 governing throttle 517 graceful manner 390 grain 59, 264, 276 grain sizes 90 grains 90, 275 granularity 260 GUI 114, 130
H
Hardware Management Console 39 hardware nodes 46, 56 hardware overview 46 hash function 30 HBA 59, 84, 348 HBA fails 84 HBA ports 89 heartbeat signal 37 heartbeat traffic 92 help 475, 543 high availability 34, 64 home directory 182 host and application server guidelines 89 configuration 153 creating 348 deleting 500 information 494 showing 373 systems 74
820
interswitch link (ISL) 61 interval 385 intracluster Metro Mirror 279, 308 IP address modifying 383, 544 IP addresses 6465, 544 IP subnet 72 ipconfig 136 IPv4 135 ipv4 and 48 IPv4 stack 140 IPv6 135 IPv6 address 139 IPv6 addresses 136 IPv6 connectivity 137 IQN 27, 59, 158 iSCSI 26, 48, 64, 159 iSCSI Address 27 iSCSI client 158 iSCSI IP address failover 160 iSCSI Multipathing 31 iSCSI Name 27 iSCSI node 27 iSCSI protocol 56 iSCSI Qualified Name 27, 59 iSCSI support 5657 iSCSI target node failover 160 ISL (interswitch link) 61 ISL hop count 279, 308 iSNS 30, 59, 159 issue CLI commands 213 ivp6 48
list the dumps 663 listing dumps 460, 663 Load balancing 228 Local authentication 40 local cluster 301, 328 Local fabric 60 local fabric interconnect 60 Local users 42 log 313 logged 458 Logical Block Address 299, 326 logical configuration data 464 logical unit numbers 341 Login Phase 27 logs 313 lsrcrelationshipcandidate 332 LU 60 LUNs 60
M
magnetic disks 49 maintenance levels 184 maintenance procedures 645 maintenance tasks 448, 636 Managed 60 Managed disk 60 managed disk 60, 479 displaying 490 working with 477 managed disk group 345 creating 483 viewing 486 Managed Disks 60 managed mode MDisk 685 managed mode to image mode 702 managed mode virtual disk 88 management 97, 808 map a VDisk 515 map a VDisk to a host 366 mapping 259 mapping events 269 mapping state 269 Master 60 master 318, 326 master console 65, 71 master VDisk 319, 326 maximum supported configurations 57 MC 60 MD5 checksum hash 30 MDG 60 MDG information 538 MDG level 344 MDGs 65 MDisk 60, 64, 479, 490 adding 343, 488 discovering 340, 481 including 343, 481 information 479 modes 685 name parameter 341 Index
J
Jumbo Frames 30
K
kernel level 225 key 160 key files on AIX 182
L
LAN Interfaces 48 last extent 686 latency 32, 92 LBA 299, 326 license 112 license feature 659 licensing feature 460 licensing feature settings 460, 659 limiting factor 97, 808 link errors 47 Linux 182 Linux kernel 35 Linux on Intel 224 list dump 460 list of MDisks 491 list of VDisks 492
821
removing 347, 489 renaming 342, 480 showing 371, 491, 537 showing in group 344 MDisk group creating 346, 483 deleting 347, 487 name 473 renaming 346, 486 showing 344, 372, 482, 538 viewing information 346 MDiskgrp 60 Metro Mirror 279 Metro Mirror consistency group 302303, 305306, 332336 Metro Mirror features 281, 309 Metro Mirror process 290, 317 Metro Mirror relationship 302, 304, 306, 311, 332333, 336, 597, 624 microcode 37 Microsoft Active Directory 43 Microsoft Cluster 194 Microsoft Multi Path Input Output 188 migrate 675 migrate a VDisk 680 migrate between MDGs 680 migrate data 685 migrate VDisks 368 migrating multiple extents 676 migration algorithm 683 functional overview 682 operations 676 overview 676 tips 687 migration activities 676 migration phase 526 migration process 369 migration progress 681 migration threads 676 mirrored 309 mirrored copy 308 mirrored VDisks 53 mkpartnership 331 mkrcconsistgrp 332 mkrcrelationship 332 MLC 49 modify a host 351 modifying a VDisk 362 mount 231 mount point 231 moving and migrating data 254 MPIO 89, 188 MSCS 194 MTU sizes 30, 159 multi layer cell 49 multipath configuration 165 multipath I/O 89 multipath storage solution 188 multipathing device driver 89
Multipathing drivers 31 multiple disk arrays 97, 808 multiple extents 676 multiple paths 31 multiple virtual machines 239
N
network bandwidth 95 Network Entity 158 Network Portals 158 new code 645 new disks 170, 176 new mapping 366 Node 61 node 35, 60, 387 adding 388 adding to cluster 559 deleting 390 failure 276 port 59 renaming 389 shutting down 390 using the GUI 558 viewing details 387 node details 387 node discovery 666 node dumps 463 node level 387 Node Unique ID 35 nodes 64 non-preferred path 248 non-redundant 58 non-zero contingency 25 N-port 61
O
offline rules 679 offload features 30 older disk systems 98 on screen content 377, 470, 543 online help 475, 543 on-screen content 377 OpenSSH 182 OpenSSH client 213 operating system versions 184 ordering 32, 262 organizing on-screen content 377 other node dumps 463 overall performance needs 64 Oversubscription 61 oversubscription 61 overwritten 259, 455
P
package numbering and version 448, 636 parallelism 682 partial extents 25 partial last extent 686
822
partnership 289, 315, 328 passphrase 127 path failover 248 path failure 277 path offline 277 path offline for source VDisk 277 path offline for target VDisk 277 path offline state 277 path-selection policy algorithms 228 peak 331 peak workload 92 pended 26 per cluster 682 per managed disk 682 performance 87 performance advantage 97, 808 performance boost 45 performance considerations 808 performance improvement 97, 808 performance monitoring tool 93 performance requirements 64 performance scalability 34 performance statistics 93 performance throttling 517 physical location 65 physical planning 65 physical rules 67 physical site 65 Physical Volume Links 249 PiT 34 PiT consistent data 254 PiT copy 264 PiT semantics 262 planning rules 64 plink 784 PLOGI 28 Point in Time 34 point in time 34 point-in-time copy 295, 322 policy decision 300, 327 port adding 352, 501 deleting 353, 502 port binding 251 port mask 8990 Power Systems 182 Powerware 67 PPRC background copy 299, 306, 326 commands 301, 327 configuration limits 327 detailed states 295, 323 preferred access node 87 preferred path 248 pre-installation planning 64 Prepare 61 prepare (pre-trigger) FlashCopy mapping command 400 PREPARE_COMPLETED 277 preparing volumes 173, 178 pre-trigger 400
primary 309, 424, 446 primary copy 326 priority 368 priority setting 368 private key 124, 127, 182, 784 production VDisk 326 provisioning 331 pseudo device driver 165 public key 124, 127, 182, 784 PuTTY 39, 124, 129, 387 CLI session 133 default location 127 security alert 134 PuTTY application 133, 390 PuTTY Installation 213 PuTTY Key Generator 127128 PuTTY Key Generator GUI 125 PuTTY Secure Copy 451 PuTTY session 127, 134 PuTTY SSH client software 213 PVLinks 249
Q
QLogic HBAs 225 Queue Full Condition 26 quiesce 387 quiesce time 573 quiesced 804 quorum 35 quorum candidates 36 Quorum Disk 35 quorum disk 36, 666 setting 666 quorum disk candidate 36 quorum disks 25
R
RAID 61 RAID controller 74 RAMAC 49 RAS 61 read workload 52 real capacity 25 real-time synchronized 279280 reassign the VDisk 368 recall commands 338, 377 recommended levels 636 Redbooks Web site 814 Contact us xxvii redundancy 48, 92 redundant 58 Redundant SAN 61 redundant SAN 61 redundant SVC 563 relationship 260, 308, 318 relationship state diagram 291, 319 reliability 87 Reliability Availability and Serviceability 61 Remote 61
823
Remote authentication 40 remote cluster 60 remote fabric 60, 99 interconnect 60 Remote users 43 remove a disk 210 remove a VDisk 182 remove an MDG 347 remove WWPN definitions 353 rename a disk controller 478 rename an MDG 486 rename an MDisk 480 renaming an I/O group 558 repartitioning 87 rescan disks 192 restart the cluster 387 restart the node 391 restarting 422, 445 restore points 256 restore procedure 672 Reverse FlashCopy 34, 256 reverse FlashCopy 56 RFC3720 27 rmrcconsistgrp 335 rmrcrelationship 335 round robin 87, 228, 248
S
sample script 787 SAN Boot Support 248, 250 SAN definitions 99 SAN fabric 74 SAN planning 71 SAN Volume Controller 61 documentation 475 general housekeeping 476, 543 help 475, 543 virtualization 38 SAN Volume Controller (SVC) 60 SAN zoning 123 SATA 94 scalable 98, 808 scalable architecture 51 SCM 50 scripting 300, 327, 375 scripts 195, 783 SCSI 61 SCSI Disk 60 SCSI primitives 340 SDD 89, 165, 172, 177, 250 SDD (Subsystem Device Driver) 172, 177, 225, 250, 689 SDD Dynamic Pathing 248 SDD installation 166 SDD package version 165, 186 SDDDSM 188 secondary 309 secondary copy 326 secondary site 64 secure data flow 123 secure session 390
Secure Shell (SSH) 124 Secure Shell connection 38 separate physical IP networks 48 sequential 88, 354, 506, 510, 522, 532 serial numbers 168, 175 serialization 276 serialization of I/O by FlashCopy 276 Service Location Protocol 30, 61, 159 service, maintenance using the GUI 635 set attributes 527 set the cluster time zone 549 set up Metro Mirror 412, 433, 583, 608 SEV 56, 363 shells 375 show the MDG 538 show the MDisks 537 shrink a VDisk 536 shrinking 536 shrinkvdisksize 369 shut down 194 shut down a single node 390 shut down the cluster 386, 550 Simple Network Management Protocol 300, 327, 343 single layer cell 49 single point of failure 61 single sign on 57 single sign-on 39, 44 site 65 SLC 49 SLP 30, 61, 159 SLP daemon 30 SNIA 2 SNMP 300, 327, 343 SNMP alerts 481 SNMP manager 457 SNMP trap 277 software upgrade 448, 636, 638 software upgrade packages 636 Solid State Disk 56 Solid State Drive 35 Solid State Drives 46 solution 97 sort 473 sort criteria 473 sorting 473 source 275, 326 space-efficient 357 Space-efficient background copy 317 space-efficient VDisk 370, 526 space-efficient VDisks 509 Space-Efficient Virtual Disk 56 space-efficient volume 369 special migration 686 split per second 90 splitting the SAN 61 SPoF 61 spreading the load 87 SSD 51 SSD market 50 SSD solution 50
SSD storage 52 SSH 38, 123, 784 SSH (Secure Shell) 124 SSH Client 39 SSH client 182, 213 SSH client software 124 SSH key 41 SSH keys 124, 129 SSH server 123–124 SSH-2 124 SSO 44 SSPC 39, 62 stack 684 stand-alone Metro Mirror relationship 417, 439 start (trigger) FlashCopy mapping command 402–403, 574–575 start a PPRC relationship command 304, 333–334 startrcrelationship 333 state 295–296, 323 connected 293, 320 consistent 294–295, 321–322 ConsistentDisconnected 298, 325 ConsistentStopped 296, 323 ConsistentSynchronized 297, 324 disconnected 293, 320 empty 299, 326 idling 297, 324 IdlingDisconnected 297, 325 inconsistent 294, 321 InconsistentCopying 296, 323 InconsistentDisconnected 298, 325 InconsistentStopped 296, 323 overview 291, 320 synchronized 295, 322 state fragments 294, 321 state overview 293, 327 state transitions 277, 320 states 269, 275, 291, 319 statistics 385 statistics collection 546 starting 546 stopping 385, 547 statistics dump 462 stop 320 stop FlashCopy consistency group 405, 576 stop FlashCopy mapping command 404 STOP_COMPLETED 278 stoprcconsistgrp 334 stoprcrelationship 334 storage cache 37 storage capacity 64 Storage Class Memory 50 stripe VDisks 97, 808 striped 506, 510, 522, 532 striped VDisk 354 subnet mask IP address 112 Subsystem Device Driver (SDD) 172, 177, 225, 250, 689 Subsystem Device Driver DSM 188 SUN Solaris support information 248 superuser 380
surviving node 390 suspended mapping 404 SVC basic installation 109 task automation 375 SVC cluster 559 SVC cluster candidates 585, 610 SVC cluster partnership 301, 328 SVC cluster software 639 SVC configuration 64 backing up 668 deleting the backup 672 restoring 672 SVC Console 39 SVC device 62 SVC GUI 39 SVC installations 84 SVC master console 124 SVC node 37, 84 SVC PPRC functions 281 SVC setup 154 SVC SSD storage 52 SVC superuser 41 svcinfo 338, 342, 377 svcinfo lsfreeextents 681 svcinfo lshbaportcandidate 352 svcinfo lsmdiskextent 681 svcinfo lsmigrate 681 svcinfo lsVDisk 371 svcinfo lsVDiskextent 681 svcinfo lsVDiskmember 371 svctask 338, 342, 377, 379 svctask chlicense 460 svctask finderr 454 svctask mkfcmap 301–304, 328, 331–334, 398, 567, 569 switching copy direction 424, 446, 606, 632 switchrcconsistgrp 336 switchrcrelationship 336 symmetrical 1 symmetrical network 61 symmetrical virtualization 1 synchronized 295, 318, 322 synchronized clocks 45 synchronizing 318 synchronous data mirroring 56 synchronous reads 684 synchronous writes 684 synthesis 275 Syslog error event logging 57 System Storage Productivity Center 62
T
T0 34 target 158, 326 target name 27 test new applications 255 threads parameter 519 threshold level 26 throttles 517 throttling parameters 517
tie breaker 36 tie-break situations 36 tie-break solution 666 tie-breaker 36 time 383 time zone 383–384 timeout 245 timestamp 44–45 Time-Zero copy 34 Tivoli Directory Server 43 Tivoli Embedded Security Services 40, 44 Tivoli Integrated Portal 39 Tivoli Storage Productivity Center 39 Tivoli Storage Productivity Center for Data 39 Tivoli Storage Productivity Center for Disk 39 Tivoli Storage Productivity Center for Replication 39 Tivoli Storage Productivity Center Standard Edition 39 token 44–45 token expiry timestamp 45 token facility 44 trace dump 462 traffic 92 traffic profile activity 64 transitions 685 trigger 402–403, 574–575
U
unallocated capacity 197 unallocated region 317 unassign 514 unconfigured nodes 388 undetected data corruption 322 unfixed error 459, 655 uninterruptible power supply 67, 71, 84, 386 unmanaged MDisk 685 unmap a VDisk 368 up2date 224 updates 224 upgrade 636–637 upgrade precautions 449 upgrading software 636 use of Metro Mirror 299, 326 used capacity 25 used free capacity 25 User account migration 39 using SDD 172, 177, 225, 250
V
VDisk 490 assigning 515 assigning to host 366 creating 354, 356, 505 creating in image mode 357, 526 deleting 364, 509, 513 discovering assigned 168, 173, 190 expanding 365 I/O governing 362 image mode migration concept 685 information 356, 504 mapped to this host 367 migrating 89, 368, 518 modifying 362, 517 path offline for source 277 path offline for target 277 showing 492 showing for MDisk 370, 482 showing map to a host 539 showing using group 371 shrinking 369, 519 working with 354 VDisk discovery 159 VDisk mirror 526 VDisk Mirroring 52 VDisk-to-host mapping 368 deleting 514 Veritas Volume Manager 248 View I/O Group details 391 viewing managed disk groups 486 virtual disk 260, 354, 466, 504 Virtual Machine File System 239 virtualization 38 Virtualization Limit 121 VLUN 60 VMFS 239–241 VMFS datastore 243 volume group 178 Voting Set 35 voting set 35–36 vpath configured 170, 176
W
warning capacity 25 warning threshold 370 Web interface 251 Windows 2000 based hosts 183 Windows 2000 host configuration 183, 237 Windows 2003 188 Windows host system CLI 213 Windows NT and 2000 specific information 183 working with managed disks 477 workload cycle 93 worldwide node name 798 worldwide port name 165 Write data 38 Write ordering 322 write ordering 285, 312, 322 write through mode 84 write workload 93 writes 313 write-through mode 38 WWNN 798 WWPNs 165, 348, 353, 497
Y
YaST Online Update 224
Z
zero buffer 317 zero contingency 25 Zero Detection 56 zero-detection algorithm 25 zone 74 zoning capabilities 74 zoning recommendation 193, 207