
Front cover


Implementing the IBM System Storage SAN Volume Controller V5.1


Install, use, and troubleshoot the SAN Volume Controller
Learn about iSCSI and how to attach iSCSI hosts
Understand what SSD has to offer

Jon Tate
Pall Beck
Angelo Bernasconi
Werner Eggli

ibm.com/redbooks


International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller V5.1

October 2009

SG24-6423-07


Note: Before using this information and the product it supports, read the information in Notices on page xxi.

Eighth Edition (October 2009)

This edition applies to Version 5, Release 1, Modification 0 of the IBM System Storage SAN Volume Controller and is based on pre-GA versions of code. This document was created or updated on January 12, 2010.

Note: This book is based on a pre-GA version of the product and may not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this Redbook for more current information.

Copyright International Business Machines Corporation 2009. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


Contents
Notices . . . . . xxi
Trademarks . . . . . xxii

Summary of changes . . . . . xxiii
October 2009, Eighth Edition . . . . . xxiii

Preface . . . . . xxv
The team who wrote this book . . . . . xxv
Become a published author . . . . . xxvii
Comments welcome . . . . . xxvii

Chapter 1. Introduction to storage virtualization . . . . . 1
1.1 What is storage virtualization . . . . . 2
1.2 User requirements that drive storage virtualization . . . . . 5
1.3 Conclusion . . . . . 6

Chapter 2. IBM System Storage SAN Volume Controller overview . . . . . 7
2.1 History . . . . . 8
2.2 Architectural overview . . . . . 9
2.2.1 SVC Virtualization Concepts . . . . . 13
2.2.2 MDisk Overview . . . . . 16
2.2.3 VDisk Overview . . . . . 18
2.2.4 Image Mode VDisk . . . . . 19
2.2.5 Managed Mode VDisk . . . . . 19
2.2.6 Cache Mode and Cache Disabled VDisks . . . . . 20
2.2.7 Mirrored VDisk . . . . . 21
2.2.8 Space-Efficient VDisks . . . . . 23
2.2.9 VDisk I/O governing . . . . . 25
2.2.10 iSCSI Overview . . . . . 26
2.2.11 Usage of IP Addresses and Ethernet ports . . . . . 28
2.2.12 iSCSI VDisk Discovery . . . . . 30
2.2.13 iSCSI Authentication . . . . . 30
2.2.14 iSCSI Multipathing . . . . . 31
2.2.15 Advanced Copy Services overview . . . . . 31
2.2.16 FlashCopy . . . . . 33
2.3 SVC cluster overview . . . . . 34
2.3.1 Quorum disks . . . . . 36
2.3.2 I/O Groups . . . . . 37
2.3.3 Cache . . . . . 37
2.3.4 Cluster management . . . . . 38
2.3.5 User Authentication . . . . . 40
2.3.6 SVC roles and user groups . . . . . 41
2.3.7 SVC local authentication . . . . . 42
2.3.8 SVC remote authentication and Single Sign On . . . . . 43
2.4 SVC hardware overview . . . . . 46
2.4.1 Fibre Channel interfaces . . . . . 47
2.4.2 LAN Interfaces . . . . . 48
2.5 Solid State Drives . . . . . 49
2.5.1 Storage bottleneck problem . . . . . 49

2.5.2 SSD solution . . . . . 50
2.5.3 SSD market . . . . . 50
2.6 SSD in the SVC . . . . . 51
2.6.1 SSD configuration rules . . . . . 51
2.6.2 SVC 5.1 supported hardware list, device driver and firmware levels . . . . . 54
2.6.3 What was new with SVC 4.3.1 . . . . . 55
2.6.4 What is new with SVC 5.1 . . . . . 55
2.7 Maximum supported configurations . . . . . 57
2.8 Useful SVC Links . . . . . 57
2.9 Commonly encountered terms . . . . . 58

Chapter 3. Planning and configuration . . . . . 63
3.1 General planning rules . . . . . 64
3.2 Physical planning . . . . . 65
3.2.1 Preparing your UPS environment . . . . . 66
3.2.2 Physical rules . . . . . 67
3.2.3 Cable connections . . . . . 71
3.3 Logical planning . . . . . 71
3.3.1 Management IP addressing plan . . . . . 72
3.3.2 SAN zoning and SAN connections . . . . . 74
3.3.3 iSCSI IP addressing plan . . . . . 78
3.3.4 Back-end storage subsystem configuration . . . . . 81
3.3.5 SVC cluster configuration . . . . . 84
3.3.6 Managed Disk Group configuration . . . . . 85
3.3.7 Virtual Disk configuration . . . . . 87
3.3.8 Host mapping (LUN masking) . . . . . 89
3.3.9 Advanced Copy Services . . . . . 90
3.3.10 SAN boot support . . . . . 95
3.3.11 Data migration from non-virtualized storage subsystem . . . . . 96
3.3.12 SVC configuration back-up procedure . . . . . 96
3.4 Performance considerations . . . . . 97
3.4.1 SAN . . . . . 97
3.4.2 Disk subsystems . . . . . 97
3.4.3 SVC . . . . . 98
3.4.4 Performance monitoring . . . . . 99

Chapter 4. SVC initial configuration . . . . . 101
4.1 Managing the cluster . . . . . 102
4.1.1 TCP/IP requirements for SAN Volume Controller . . . . . 102
4.2 Systems Storage Productivity Center overview . . . . . 105
4.2.1 SSPC hardware . . . . . 106
4.2.2 SVC installation planning information for SSPC . . . . . 107
4.3 SVC Hardware Management Console . . . . . 108
4.3.1 SVC installation planning information for the HMC . . . . . 108
4.4 SVC Cluster Set Up . . . . . 109
4.4.1 Creating the cluster (first time) using the service panel . . . . . 109
4.4.2 Prerequisites . . . . . 112
4.4.3 Initial configuration using the service panel . . . . . 112
4.5 Adding the cluster to the SSPC or the SVC HMC . . . . . 114
4.5.1 Configuring the GUI . . . . . 114
4.6 Secure Shell overview and CIM Agent . . . . . 123
4.6.1 Generating public and private SSH key pairs using PuTTY . . . . . 125
4.6.2 Uploading the SSH public key to the SVC cluster . . . . . 128


4.6.3 Configuring the PuTTY session for the CLI . . . . . 129
4.6.4 Starting the PuTTY CLI session . . . . . 133
4.6.5 Configuring SSH for AIX clients . . . . . 135
4.7 Using IPv6 . . . . . 135
4.7.1 Migrating a cluster from IPv4 to IPv6 . . . . . 136
4.7.2 Migrating a cluster from IPv6 to IPv4 . . . . . 140
4.8 Upgrading the SVC Console software . . . . . 141

Chapter 5. Host configuration . . . . . 153
5.1 SVC setup . . . . . 154
5.1.1 FC/SAN setup overview . . . . . 154
5.1.2 Port mask . . . . . 157
5.2 iSCSI overview . . . . . 158
5.2.1 Initiators and targets . . . . . 158
5.2.2 Nodes . . . . . 158
5.2.3 IQN . . . . . 158
5.3 VDisk discovery . . . . . 159
5.4 Authentication . . . . . 160
5.5 AIX-specific information . . . . . 162
5.5.1 Configuring the AIX host . . . . . 163
5.5.2 Operating system versions and maintenance levels . . . . . 163
5.5.3 HBAs for IBM System p hosts . . . . . 163
5.5.4 Configuring for fast fail and dynamic tracking . . . . . 164
5.5.5 Subsystem Device Driver (SDDPCM or SDD) . . . . . 165
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3 . . . . . 168
5.5.7 Using SDD . . . . . 172
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD . . . . . 173
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM . . . . . 173
5.5.10 Using SDDPCM . . . . . 177
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . . . 178
5.5.12 Expanding an AIX volume . . . . . 178
5.5.13 Removing an SVC volume on AIX . . . . . 182
5.5.14 Running SVC commands from an AIX host system . . . . . 182
5.6 Windows-specific information . . . . . 183
5.6.1 Configuring Windows 2000, Windows 2003, and Windows 2008 hosts . . . . . 183
5.6.2 Configuring Windows . . . . . 183
5.6.3 Hardware lists, device driver, HBAs and firmware levels . . . . . 183
5.6.4 Host adapter installation and configuration . . . . . 184
5.6.5 Changing the disk timeout on Microsoft Windows Server . . . . . 186
5.6.6 SDD driver installation on Windows . . . . . 186
5.6.7 SDDDSM driver installation on Windows . . . . . 188
5.7 Discovering the assigned VDisk in Windows 2000 / 2003 . . . . . 190
5.7.1 Extending a Windows 2000 or 2003 volume . . . . . 194
5.8 Example configuration of attaching an SVC to a Windows 2008 host . . . . . 199
5.8.1 Installing SDDDSM on a Windows 2008 host . . . . . 199
5.8.2 Installing SDDDSM . . . . . 202
5.8.3 Attaching SVC VDisks to Windows 2008 . . . . . 204
5.8.4 Extending a Windows 2008 Volume . . . . . 210
5.8.5 Removing a disk on Windows . . . . . 210
5.9 Using the SVC CLI from a Windows host . . . . . 213
5.10 Microsoft Volume Shadow Copy . . . . . 214
5.10.1 Installation overview . . . . . 215
5.10.2 System requirements for the IBM System Storage hardware provider . . . . . 215


5.10.3 Installing the IBM System Storage hardware provider . . . . . 215
5.10.4 Verifying the installation . . . . . 219
5.10.5 Creating the free and reserved pools of volumes . . . . . 220
5.10.6 Changing the configuration parameters . . . . . 221
5.11 Linux (on Intel) specific information . . . . . 224
5.11.1 Configuring the Linux host . . . . . 224
5.11.2 Configuration information . . . . . 224
5.11.3 Disabling automatic Linux system updates . . . . . 224
5.11.4 Setting queue depth with QLogic HBAs . . . . . 225
5.11.5 Multipathing in Linux . . . . . 225
5.11.6 Creating and preparing SDD volumes for use . . . . . 230
5.11.7 Using the operating system MPIO . . . . . 232
5.11.8 Creating and preparing MPIO volumes for use . . . . . 232
5.12 VMware configuration information . . . . . 237
5.12.1 Configuring VMware hosts . . . . . 237
5.12.2 Operating system versions and maintenance levels . . . . . 237
5.12.3 Guest operating systems . . . . . 237
5.12.4 HBAs for hosts running VMware . . . . . 237
5.12.5 Multipath solutions supported . . . . . 238
5.12.6 VMware storage and zoning recommendations . . . . . 239
5.12.7 Setting the HBA timeout for failover in VMware . . . . . 240
5.12.8 Multipathing in ESX . . . . . 241
5.12.9 Attaching VMware to VDisks . . . . . 241
5.12.10 VDisk naming in VMware . . . . . 244
5.12.11 Setting the Microsoft guest operating system timeout . . . . . 245
5.12.12 Extending a VMFS volume . . . . . 245
5.12.13 Removing a datastore from an ESX host . . . . . 247
5.13 SUN Solaris support information . . . . . 248
5.13.1 Operating system versions and maintenance levels . . . . . 248
5.13.2 SDD dynamic pathing . . . . . 248
5.14 HP-UX configuration information . . . . . 249
5.14.1 Operating system versions and maintenance levels . . . . . 249
5.14.2 Multipath solutions supported . . . . . 249
5.14.3 Co-existence of SDD and PV Links . . . . . 249
5.14.4 Using an SVC VDisk as a cluster lock disk . . . . . 250
5.14.5 Support for HP-UX greater than eight LUNs . . . . . 250
5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . . . 250
5.16 Calculating the queue depth . . . . . 251
5.17 Further sources of information . . . . . 252
5.17.1 IBM Redbook publications containing SVC storage subsystem attachment guidelines . . . . . 252

Chapter 6. Advanced Copy Services . . . . . 253
6.1 FlashCopy . . . . . 254
6.1.1 Business requirement . . . . . 254
6.1.2 Moving and migrating data . . . . . 254
6.1.3 Backup . . . . . 254
6.1.4 Restore . . . . . 255
6.1.5 Application testing . . . . . 255
6.1.6 SVC FlashCopy features . . . . . 255
6.2 Reverse FlashCopy . . . . . 256
6.2.1 FlashCopy and TSM . . . . . 257
6.3 How FlashCopy works . . . . . 259


6.4 Implementation of SVC FlashCopy . . . . . 260
6.4.1 FlashCopy mappings . . . . . 260
6.4.2 Multiple Target FlashCopy . . . . . 261
6.4.3 Consistency groups . . . . . 262
6.4.4 FlashCopy indirection layer . . . . . 263
6.4.5 Grains and the FlashCopy bitmap . . . . . 264
6.4.6 Interaction and dependency between MTFC . . . . . 265
6.4.7 Summary of the FlashCopy indirection layer algorithm . . . . . 267
6.4.8 Interaction with the cache . . . . . 267
6.4.9 FlashCopy rules . . . . . 268
6.4.10 FlashCopy and image mode disks . . . . . 268
6.4.11 FlashCopy mapping events . . . . . 269
6.4.12 FlashCopy mapping states . . . . . 271
6.4.13 Space-efficient FlashCopy . . . . . 274
6.4.14 Background copy . . . . . 275
6.4.15 Synthesis . . . . . 275
6.4.16 Serialization of I/O by FlashCopy . . . . . 276
6.4.17 Error handling . . . . . 276
6.4.18 Asynchronous notifications . . . . . 277
6.4.19 Interoperation with Metro Mirror and Global Mirror . . . . . 278
6.4.20 Recovering data from FlashCopy . . . . . 278
6.5 Metro Mirror . . . . . 279
6.5.1 Metro Mirror overview . . . . . 279
6.5.2 Remote copy techniques . . . . . 280
6.5.3 SVC Metro Mirror features . . . . . 281
6.5.4 Multi-Cluster-Mirroring (MCM) . . . . . 281
6.5.5 Metro Mirror relationship . . . . . 285
6.5.6 Importance of write ordering . . . . . 285
6.5.7 How Metro Mirror works . . . . . 289
6.5.8 Metro Mirror process . . . . . 290
6.5.9 Methods of synchronization . . . . . 290
6.5.10 State overview . . . . . 293
6.5.11 Detailed states . . . . . 295
6.5.12 Practical use of Metro Mirror . . . . . 299
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . 300
6.5.14 Metro Mirror configuration limits . . . . . 300
6.6 Metro Mirror commands . . . . . 301
6.6.1 Listing available SVC cluster partners . . . . . 301
6.6.2 Creating SVC cluster partnership . . . . . 301
6.6.3 Creating a Metro Mirror consistency group . . . . . 302
6.6.4 Creating a Metro Mirror relationship . . . . . 302
6.6.5 Changing a Metro Mirror relationship . . . . . 303
6.6.6 Changing a Metro Mirror consistency group . . . . . 303
6.6.7 Starting a Metro Mirror relationship . . . . . 304
6.6.8 Stopping a Metro Mirror relationship . . . . . 304
6.6.9 Starting a Metro Mirror consistency group . . . . . 305
6.6.10 Stopping a Metro Mirror consistency group . . . . . 305
6.6.11 Deleting a Metro Mirror relationship . . . . . 305
6.6.12 Deleting a Metro Mirror consistency group . . . . . 306
6.6.13 Reversing a Metro Mirror relationship . . . . . 306
6.6.14 Reversing a Metro Mirror consistency group . . . . . 306
6.6.15 Background copy . . . . . 306
6.7 Global Mirror overview . . . . . 308


6.7.1 Intracluster Global Mirror . . . . . 308
6.7.2 Intercluster Global Mirror . . . . . 308
6.8 Remote copy techniques . . . . . 308
6.8.1 Asynchronous remote copy . . . . . 308
6.8.2 SVC Global Mirror features . . . . . 309
6.9 Global Mirror relationships . . . . . 311
6.9.1 Global Mirror relationship between primary and secondary VDisk . . . . . 312
6.9.2 Importance of write ordering . . . . . 312
6.9.3 Dependent writes that span multiple VDisks . . . . . 312
6.9.4 Global Mirror consistency groups . . . . . 314
6.10 How Global Mirror works . . . . . 315
6.10.1 Intercluster communication and zoning . . . . . 315
6.10.2 SVC Cluster partnership . . . . . 315
6.10.3 Maintenance of the intercluster link . . . . . 316
6.10.4 Distribution of work amongst nodes . . . . . 316
6.10.5 Background Copy Performance . . . . . 317
6.10.6 Space-efficient background copy . . . . . 317
6.11 Global Mirror process . . . . . 317
6.11.1 Methods of synchronization . . . . . 318
6.11.2 State overview . . . . . 320
6.11.3 Detailed states . . . . . 323
6.11.4 Practical use of Global Mirror . . . . . 326
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . 327
6.11.6 Global Mirror configuration limits . . . . . 327
6.12 Global Mirror commands . . . . . 327
6.12.1 Listing the available SVC cluster partners . . . . . 328
6.12.2 Creating an SVC cluster partnership . . . . . 331
6.12.3 Creating a Global Mirror consistency group . . . . . 332
6.12.4 Creating a Global Mirror relationship . . . . . 332
6.12.5 Changing a Global Mirror relationship . . . . . 333
6.12.6 Changing a Global Mirror consistency group . . . . . 333
6.12.7 Starting a Global Mirror relationship . . . . . 333
6.12.8 Stopping a Global Mirror relationship . . . . . 334
6.12.9 Starting a Global Mirror consistency group . . . . . 334
6.12.10 Stopping a Global Mirror consistency group . . . . . 334
6.12.11 Deleting a Global Mirror relationship . . . . . 335
6.12.12 Deleting a Global Mirror consistency group . . . . . 335
6.12.13 Reversing a Global Mirror relationship . . . . . 336
6.12.14 Reversing a Global Mirror consistency group . . . . . 336

Chapter 7. SVC operations using the CLI . . . . . 337
7.1 Normal operations using CLI . . . . . 338
7.1.1 Command syntax and online help . . . . . 338
7.2 Working with managed disks and disk controller systems . . . . . 338
7.2.1 Viewing disk controller details . . . . . 338
7.2.2 Renaming a controller . . . . . 339
7.2.3 Discovery status . . . . . 340
7.2.4 Discovering MDisks . . . . . 340
7.2.5 Viewing MDisk information . . . . . 341
7.2.6 Renaming an MDisk . . . . . 342
7.2.7 Including an MDisk . . . . . 343
7.2.8 Adding MDisks to an MDisk group . . . . . 343
7.2.9 Showing the MDisk group . . . . . 344


7.2.10 Showing MDisks in an MDisk group . . . . . 344
7.2.11 Working with managed disk groups (MDG) . . . . . 344
7.2.12 Creating MDisk group . . . . . 345
7.2.13 Viewing MDisk group information . . . . . 346
7.2.14 Renaming an MDisk group . . . . . 346
7.2.15 Deleting an MDisk group . . . . . 347
7.2.16 Removing MDisks from an MDisk group . . . . . 347
7.3 Working with hosts . . . . . 347
7.3.1 Creating a Fibre Channel attached host . . . . . 348
7.3.2 Creating an iSCSI attached host . . . . . 349
7.3.3 Modifying a host . . . . . 351
7.3.4 Deleting a host . . . . . 352
7.3.5 Adding ports to a defined host . . . . . 352
7.3.6 Deleting ports . . . . . 353
7.4 Working with VDisks . . . . . 354
7.4.1 Creating a VDisk . . . . . 354
7.4.2 VDisk information . . . . . 356
7.4.3 Creating a Space efficient VDisk . . . . . 356
7.4.4 Creating a VDisk in image mode . . . . . 357
7.4.5 Adding a mirrored VDisk copy . . . . . 358
7.4.6 Splitting a VDisk Copy . . . . . 361
7.4.7 Modifying a VDisk . . . . . 362
7.4.8 I/O governing . . . . . 363
7.4.9 Deleting a VDisk . . . . . 364
7.4.10 Expanding a VDisk . . . . . 365
7.4.11 Assigning a VDisk to a host . . . . . 366
7.4.12 Showing VDisks-to host-mapping . . . . . 367
7.4.13 Deleting a VDisk-to-host mapping . . . . . 368
7.4.14 Migrating a VDisk . . . . . 368
7.4.15 Migrate a VDisk to an image mode VDisk . . . . . 369
7.4.16 Shrinking a VDisk . . . . . 369
7.4.17 Showing a VDisk on an MDisk . . . . . 370
7.4.18 Showing VDisks using a MDisk group . . . . . 371
7.4.19 Showing from what MDisks the VDisk has its extents . . . . . 371
7.4.20 Showing from what MDisk group a VDisk has its extents . . . . . 372
7.4.21 Showing the host to which the VDisk is mapped to . . . . . 373
7.4.22 Showing the VDisk to which the host is mapped to . . . . . 373
7.4.23 Tracing a VDisk from a host back to its physical disk . . . . . 374
7.5 Scripting under the CLI for SVC task automation . . . . . 375
7.6 SVC advanced operations using the CLI . . . . . 377
7.6.1 Command syntax . . . . . 377
7.6.2 Organizing on screen content . . . . . 377
7.7 Managing the cluster using the CLI . . . . . 379
7.7.1 Viewing cluster properties . . . . . 379
7.7.2 Changing cluster settings . . . . . 379
7.7.3 Cluster Authentication . . . . . 380
7.7.4 iSCSI configuration . . . . . 380
7.7.5 Modifying IP addresses . . . . . 383
7.7.6 Supported IP address formats . . . . . 383
7.7.7 Setting the cluster time zone and time . . . . . 383
7.7.8 Start statistics collection . . . . . 385
7.7.9 Stopping a statistics collection . . . . . 385
7.7.10 Status of copy operation . . . . . 386


7.7.11 Shutting down a cluster . . . . . 386
7.8 Nodes . . . . . 387
7.8.1 Viewing node details . . . . . 387
7.8.2 Adding a node . . . . . 388
7.8.3 Renaming a node . . . . . 389
7.8.4 Deleting a node . . . . . 390
7.8.5 Shutting down a node . . . . . 390
7.9 I/O groups . . . . . 391
7.9.1 Viewing I/O group details . . . . . 391
7.9.2 Renaming an I/O group . . . . . 391
7.9.3 Adding and removing hostiogrp . . . . . 392
7.9.4 Listing I/O groups . . . . . 393
7.10 Managing authentication . . . . . 393
7.10.1 Managing users using the CLI . . . . . 393
7.10.2 Managing user roles and groups . . . . . 395
7.10.3 Changing a user . . . . . 395
7.10.4 Audit Log command . . . . . 395
7.11 Managing Copy Services . . . . . 396
7.11.1 FlashCopy operations . . . . . 396
7.11.2 Setting up FlashCopy . . . . . 397
7.11.3 Creating a FlashCopy consistency group . . . . . 398
7.11.4 Creating a FlashCopy mapping . . . . . 398
7.11.5 Preparing (pre-triggering) the FlashCopy mapping . . . . . 400
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group . . . . . 401
7.11.7 Starting (triggering) FlashCopy mappings . . . . . 402
7.11.8 Starting (triggering) FlashCopy consistency group . . . . . 403
7.11.9 Monitoring the FlashCopy progress . . . . . 403
7.11.10 Stopping the FlashCopy mapping . . . . . 404
7.11.11 Stopping the FlashCopy consistency group . . . . . 405
7.11.12 Deleting the FlashCopy mapping . . . . . 406
7.11.13 Deleting the FlashCopy consistency group . . . . . 406
7.11.14 Migrate a VDisk to a Space-Efficient VDisk . . . . . 407
7.11.15 Reverse FlashCopy . . . . . 411
7.11.16 Split-stopping of FlashCopy maps . . . . . 411
7.12 Metro Mirror operation . . . . . 412
7.12.1 Setting up Metro Mirror . . . . . 413
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . 414
7.12.3 Creating a Metro Mirror Consistency Group . . . . . 415
7.12.4 Creating the Metro Mirror relationships . . . . . 416
7.12.5 Creating stand-alone Metro Mirror relationship for MM_App_Pri . . . . . 417
7.12.6 Starting Metro Mirror . . . . . 418
7.12.7 Starting a Metro Mirror consistency group . . . . . 419
7.12.8 Monitoring the background copy progress . . . . . 419
7.12.9 Stopping and restarting Metro Mirror . . . . . 421
7.12.10 Stopping a stand-alone Metro Mirror relationship . . . . . 421
7.12.11 Stopping a Metro Mirror consistency group . . . . . 421
7.12.12 Restarting a Metro Mirror relationship in the Idling state . . . . . 422
7.12.13 Restarting a Metro Mirror consistency group in the Idling state . . . . . 423
7.12.14 Changing copy direction for Metro Mirror . . . . . 424
7.12.15 Switching copy direction for a Metro Mirror relationship . . . . . 424
7.12.16 Switching copy direction for a Metro Mirror consistency group . . . . . 425
7.12.17 Creating an SVC partnership between many clusters . . . . . 426
7.12.18 Start Configuration partnership . . . . . 427


7.13 Global Mirror operation . . . . . 433
7.13.1 Setting up Global Mirror . . . . . 434
7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . 435
7.13.3 Changing link tolerance and cluster delay simulation . . . . . 436
7.13.4 Creating a Global Mirror consistency group . . . . . 438
7.13.5 Creating Global Mirror relationships . . . . . 438
7.13.6 Creating the Stand-alone Global Mirror relationship for GM_App_Pri . . . . . 439
7.13.7 Starting Global Mirror . . . . . 440
7.13.8 Starting a stand-alone Global Mirror relationship . . . . . 440
7.13.9 Starting a Global Mirror consistency group . . . . . 441
7.13.10 Monitoring background copy progress . . . . . 441
7.13.11 Stopping and restarting Global Mirror . . . . . 443
7.13.12 Stopping a stand-alone Global Mirror relationship . . . . . 443
7.13.13 Stopping a Global Mirror consistency group . . . . . 444
7.13.14 Restarting a Global Mirror relationship in the Idling state . . . . . 445
7.13.15 Restarting a Global Mirror consistency group in the Idling state . . . . . 445
7.13.16 Changing direction for Global Mirror . . . . . 446
7.13.17 Switching copy direction for a Global Mirror relationship . . . . . 446
7.13.18 Switching copy direction for a Global Mirror consistency group . . . . . 447
7.14 Service and maintenance . . . . . 448
7.14.1 Upgrading software . . . . . 448
7.14.2 Running maintenance procedures . . . . . 454
7.14.3 Setting up SNMP notification . . . . . 457
7.14.4 Set Syslog Event Notification . . . . . 457
7.14.5 Configuring error notification using an email server . . . . . 458
7.14.6 Analyzing the error log . . . . . 458
7.14.7 License settings . . . . . 460
7.14.8 Listing dumps . . . . . 460
7.14.9 Backing up the SVC cluster configuration . . . . . 464
7.14.10 Restoring the SVC cluster configuration . . . . . 466
7.14.11 Deleting configuration backup . . . . . 466
7.15 SAN troubleshooting and data collection . . . . . 466
7.16 T3 recovery process . . . . . 467

Chapter 8. SVC operations using the GUI . . . . . 469
8.1 SVC normal operations using the GUI . . . . . 470
8.1.1 Organizing on screen content . . . . . 470
8.1.2 Documentation . . . . . 475
8.1.3 Help . . . . . 475
8.1.4 General housekeeping . . . . . 476
8.1.5 Viewing progress . . . . . 476
8.2 Working with managed disks . . . . . 477
8.2.1 Viewing disk controller details . . . . . 477
8.2.2 Renaming a disk controller . . . . . 478
8.2.3 Discovery status . . . . . 479
8.2.4 Managed disks . . . . . 479
8.2.5 MDisk information . . . . . 479
8.2.6 Renaming an MDisk . . . . . 480
8.2.7 Discovering MDisks . . . . . 481
8.2.8 Including an MDisk . . . . . 481
8.2.9 Showing a VDisk using a certain MDisk . . . . . 482
8.3 Working with managed disk groups (MDG) . . . . . 483
8.3.1 Viewing MDisk group information . . . . . 483


8.3.2 Creating MDisk groups . . . . . 483
8.3.3 Renaming an MDisk group . . . . . 486
8.3.4 Deleting an MDisk group . . . . . 487
8.3.5 Adding MDisks . . . . . 488
8.3.6 Removing MDisks . . . . . 489
8.3.7 Displaying MDisks . . . . . 490
8.3.8 Showing MDisks in this group . . . . . 491
8.3.9 Showing the VDisks associated with an MDisk group . . . . . 492
8.4 Working with hosts . . . . . 493
8.4.1 Host information . . . . . 494
8.4.2 Creating a host . . . . . 495
8.4.3 Fibre Channel attached hosts . . . . . 495
8.4.4 iSCSI attached hosts . . . . . 497
8.4.5 Modifying a host . . . . . 499
8.4.6 Deleting a host . . . . . 500
8.4.7 Adding ports . . . . . 501
8.4.8 Deleting ports . . . . . 502
8.5 Working with virtual disks . . . . . 504
8.5.1 Using the Virtual Disks window for VDisks . . . . . 504
8.5.2 VDisk information . . . . . 504
8.5.3 Creating a VDisk . . . . . 505
8.5.4 Creating a space-efficient VDisk with auto-expand . . . . . 509
8.5.5 Deleting a VDisk . . . . . 513
8.5.6 Deleting a VDisk-to-host mapping . . . . . 514
8.5.7 Expanding a VDisk . . . . . 514
8.5.8 Assigning a VDisk to a host . . . . . 515
8.5.9 Modifying a VDisk . . . . . 517
8.5.10 Migrating a VDisk . . . . . 518
8.5.11 Migrating a VDisk to an image mode VDisk . . . . . 519
8.5.12 Creating a VDisk Mirror from an existing VDisk . . . . . 521
8.5.13 Creating a mirrored VDisk . . . . . 523
8.5.14 Creating a VDisk in image mode . . . . . 526
8.5.15 Creating an image mode mirrored VDisk . . . . . 529
8.5.16 Migrating to a space-efficient VDisk using VDisk mirroring . . . . . 532
8.5.17 Deleting a VDisk Copy from a VDisk mirror . . . . . 534
8.5.18 Splitting a VDisk Copy . . . . . 535
8.5.19 Shrinking a VDisk . . . . . 536
8.5.20 Showing the MDisks used by a VDisk . . . . . 537
8.5.21 Showing the MDG to which a VDisk belongs . . . . . 538
8.5.22 Showing the host to which the VDisk is mapped . . . . . 538
8.5.23 Showing capacity information . . . . . 538
8.5.24 Showing VDisks mapped to a particular host . . . . . 539
8.5.25 Deleting VDisks from a host . . . . . 540
8.6 Working with SSD drives . . . . . 540
8.6.1 SSD introduction . . . . . 540
8.7 SVC Advanced operations using the GUI . . . . . 543
8.7.1 Organizing on screen content . . . . . 543
8.8 Managing the cluster using the GUI . . . . . 544
8.8.1 Viewing cluster properties . . . . . 544
8.8.2 Modifying IP addresses . . . . . 544
8.8.3 Starting the statistics collection . . . . . 546
8.8.4 Stopping the statistics collection . . . . . 547
8.8.5 Metro Mirror and Global Mirror . . . . . 548


8.8.6 iSCSI . . . . . . . . . . 548
8.8.7 Setting the cluster time and configuring NTP server . . . . . . . . . . 549
8.8.8 Shutting down a cluster . . . . . . . . . . 550
8.9 Manage authentication . . . . . . . . . . 552
8.9.1 Modify current user . . . . . . . . . . 553
8.9.2 Creating a user . . . . . . . . . . 554
8.9.3 Modifying a user role . . . . . . . . . . 555
8.9.4 Deleting a user role . . . . . . . . . . 556
8.9.5 User Groups . . . . . . . . . . 557
8.9.6 Cluster password . . . . . . . . . . 557
8.9.7 Remote authentication . . . . . . . . . . 558
8.10 Working with nodes using the GUI . . . . . . . . . . 558
8.10.1 I/O groups . . . . . . . . . . 558
8.10.2 Renaming an I/O group . . . . . . . . . . 558
8.10.3 Adding nodes to the cluster . . . . . . . . . . 559
8.10.4 Configuring iSCSI ports . . . . . . . . . . 563
8.11 Managing Copy Services . . . . . . . . . . 567
8.12 FlashCopy operations using the GUI . . . . . . . . . . 567
8.13 Creating a FlashCopy consistency group . . . . . . . . . . 567
8.13.1 Creating a FlashCopy mapping . . . . . . . . . . 569
8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . . . . . . 573
8.13.3 Starting (triggering) FlashCopy mappings . . . . . . . . . . 574
8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . 575
8.13.5 Monitoring the FlashCopy progress . . . . . . . . . . 576
8.13.6 Stopping the FlashCopy consistency group . . . . . . . . . . 576
8.13.7 Deleting the FlashCopy mapping . . . . . . . . . . 578
8.13.8 Deleting the FlashCopy consistency group . . . . . . . . . . 579
8.13.9 Migration between a fully allocated VDisk and Space-efficient VDisk . . . . . . . . . . 580
8.13.10 Reversing and splitting FlashCopy mappings . . . . . . . . . . 580
8.14 Metro Mirror operations . . . . . . . . . . 582
8.14.1 Cluster partnership . . . . . . . . . . 582
8.14.2 Setting up Metro Mirror . . . . . . . . . . 584
8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . . . 585
8.14.4 Creating a Metro Mirror consistency group . . . . . . . . . . 587
8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . . . . . . . . 590
8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . . . . . . . . 594
8.14.7 Starting Metro Mirror . . . . . . . . . . 597
8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . 597
8.14.9 Starting a Metro Mirror consistency group . . . . . . . . . . 598
8.14.10 Monitoring background copy progress . . . . . . . . . . 599
8.14.11 Stopping and restarting Metro Mirror . . . . . . . . . . 599
8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . 600
8.14.13 Stopping a Metro Mirror consistency group . . . . . . . . . . 600
8.14.14 Restarting a Metro Mirror relationship in the Idling state . . . . . . . . . . 602
8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . 603
8.14.16 Changing copy direction for Metro Mirror . . . . . . . . . . 604
8.14.17 Switching copy direction for a Metro Mirror consistency group . . . . . . . . . . 605
8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . 606
8.15 Global Mirror operations . . . . . . . . . . 608
8.15.1 Setting up Global Mirror . . . . . . . . . . 609
8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . . . 609
8.15.3 Global Mirror link tolerance and delay simulations . . . . . . . . . . 612
8.15.4 Creating a Global Mirror consistency group . . . . . . . . . . 614


8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . . . . . . . . 617
8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . . . . . . . . 620
8.15.7 Starting Global Mirror . . . . . . . . . . 624
8.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . 624
8.15.9 Starting a Global Mirror consistency group . . . . . . . . . . 625
8.15.10 Monitoring background copy progress . . . . . . . . . . 626
8.15.11 Stopping and restarting Global Mirror . . . . . . . . . . 627
8.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . 627
8.15.13 Stopping a Global Mirror consistency group . . . . . . . . . . 628
8.15.14 Restarting a Global Mirror Relationship in the Idling state . . . . . . . . . . 630
8.15.15 Restarting a Global Mirror consistency group in the Idling state . . . . . . . . . . 631
8.15.16 Changing copy direction for Global Mirror . . . . . . . . . . 632
8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . 634
8.16 Service and maintenance . . . . . . . . . . 636
8.17 Upgrading software . . . . . . . . . . 636
8.17.1 Package numbering and version . . . . . . . . . . 636
8.17.2 Upgrade status utility . . . . . . . . . . 636
8.17.3 Precautions before upgrade . . . . . . . . . . 637
8.17.4 SVC software upgrade test utility . . . . . . . . . . 638
8.17.5 Upgrade procedure . . . . . . . . . . 639
8.17.6 Running maintenance procedures . . . . . . . . . . 645
8.17.7 Setting up error notification . . . . . . . . . . 647
8.17.8 Set Syslog Event Notification . . . . . . . . . . 649
8.17.9 Set e-mail features . . . . . . . . . . 651
8.17.10 Analyzing the error log . . . . . . . . . . 655
8.17.11 License settings . . . . . . . . . . 659
8.17.12 Viewing the license settings log . . . . . . . . . . 662
8.17.13 Dumping the cluster configuration . . . . . . . . . . 663
8.17.14 Listing dumps . . . . . . . . . . 663
8.17.15 Setting up a quorum disk . . . . . . . . . . 666
8.18 Backing up the SVC configuration . . . . . . . . . . 668
8.18.1 Backup procedure . . . . . . . . . . 669
8.18.2 Saving the SVC configuration . . . . . . . . . . 670
8.18.3 Restoring the SVC configuration . . . . . . . . . . 672
8.18.4 Deleting the configuration backup files . . . . . . . . . . 672
8.18.5 Fabrics . . . . . . . . . . 672
8.18.6 CIMOM Log Configuration . . . . . . . . . . 673
Chapter 9. Data migration . . . . . . . . . . 675
9.1 Migration overview . . . . . . . . . . 676
9.2 Migration operations . . . . . . . . . . 676
9.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . 676
9.2.2 Migrating extents off an MDisk that is being deleted . . . . . . . . . . 677
9.2.3 Migrating a VDisk between MDGs . . . . . . . . . . 678
9.2.4 Migrating the VDisk to image mode . . . . . . . . . . 680
9.2.5 Migrating a VDisk between I/O groups . . . . . . . . . . 680
9.2.6 Monitoring the migration progress . . . . . . . . . . 681
9.3 Functional overview of migration . . . . . . . . . . 682
9.3.1 Parallelism . . . . . . . . . . 682
9.3.2 Error handling . . . . . . . . . . 683
9.3.3 Migration algorithm . . . . . . . . . . 683
9.4 Migrating data from an image mode VDisk . . . . . . . . . . 685
9.4.1 Image mode VDisk migration concept . . . . . . . . . . 685


9.4.2 Migration tips . . . . . . . . . . 687
9.5 Data migration for Windows using the SVC GUI . . . . . . . . . . 687
9.5.1 Windows 2008 host system connected directly to the DS4700 . . . . . . . . . . 688
9.5.2 SVC added between the host system and the DS4700 . . . . . . . . . . 689
9.5.3 Put the migrated disks on a Windows 2008 host online . . . . . . . . . . 698
9.5.4 Migrating the VDisk from image mode to managed mode . . . . . . . . . . 700
9.5.5 Migrating the VDisk from managed mode to image mode . . . . . . . . . . 702
9.5.6 Migrating the VDisk from image mode to image mode . . . . . . . . . . 705
9.5.7 Free the data from the SVC . . . . . . . . . . 709
9.5.8 Put the disks online in Windows 2008 that have been freed from SVC . . . . . . . . . . 710
9.6 Migrating Linux SAN disks to SVC disks . . . . . . . . . . 712
9.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 714
9.6.2 Prepare your SVC to virtualize disks . . . . . . . . . . 715
9.6.3 Move the LUNs to the SVC . . . . . . . . . . 719
9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . 722
9.6.5 Preparing to migrate from the SVC . . . . . . . . . . 725
9.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . 727
9.6.7 Remove the LUNs from the SVC . . . . . . . . . . 728
9.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . 732
9.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 733
9.7.2 Prepare your SVC to virtualize disks . . . . . . . . . . 735
9.7.3 Move the LUNs to the SVC . . . . . . . . . . 738
9.7.4 Migrate the image mode VDisks . . . . . . . . . . 741
9.7.5 Preparing to migrate from the SVC . . . . . . . . . . 744
9.7.6 Migrate the managed VDisks to image mode VDisks . . . . . . . . . . 746
9.7.7 Remove the LUNs from the SVC . . . . . . . . . . 747
9.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . 750
9.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 752
9.8.2 Prepare your SVC to virtualize disks . . . . . . . . . . 753
9.8.3 Move the LUNs to the SVC . . . . . . . . . . 758
9.8.4 Migrate image mode VDisks to VDisks . . . . . . . . . . 760
9.8.5 Preparing to migrate from the SVC . . . . . . . . . . 762
9.8.6 Migrate the managed VDisks . . . . . . . . . . 765
9.8.7 Remove the LUNs from the SVC . . . . . . . . . . 766
9.9 Using SVC for storage migration . . . . . . . . . . 769
9.10 Using VDisk Mirroring and Space-Efficient VDisk together . . . . . . . . . . 770
9.10.1 Zero detect . . . . . . . . . . 770
9.10.2 VDisk Mirroring With SEV . . . . . . . . . . 772
9.10.3 Metro Mirror and SEV . . . . . . . . . . 778
Appendix A. Scripting . . . . . . . . . . 783
Scripting structure . . . . . . . . . . 784
Automated VDisk creation . . . . . . . . . . 785
SVC tree . . . . . . . . . . 788
Scripting alternatives . . . . . . . . . . 795
Appendix B. Node replacement . . . . . . . . . . 797
Replacing nodes nondisruptively . . . . . . . . . . 798
Expanding an existing SVC cluster . . . . . . . . . . 802
Moving VDisks to a new I/O group . . . . . . . . . . 804
Replacing nodes disruptively (rezoning the SAN) . . . . . . . . . . 805

Appendix C. Performance data and statistics gathering . . . . . . . . . . 807
SVC performance overview . . . . . . . . . . 808

Performance considerations . . . . . . . . . . 808
SVC . . . . . . . . . . 808
Performance monitoring . . . . . . . . . . 808
Collecting performance statistics . . . . . . . . . . 808
Performance data collection and TPC-SE . . . . . . . . . . 810
Related publications . . . . . . . . . . 813
IBM Redbooks . . . . . . . . . . 813
Other resources . . . . . . . . . . 813
Referenced Web sites . . . . . . . . . . 814
How to get Redbooks . . . . . . . . . . 814
Help from IBM . . . . . . . . . . 814

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

Copyright IBM Corp. 2009. All rights reserved.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol ( or ), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L AIX developerWorks DS4000 DS6000 DS8000 Enterprise Storage Server FlashCopy GPFS IBM Systems Director Active Energy Manager IBM Power Systems Redbooks Redpaper Redbooks (logo) Solid System i System p System Storage System Storage DS System x Tivoli TotalStorage WebSphere XIV

The following terms are trademarks of other companies: Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation. Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries. QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States. ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and other countries. VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes for SG24-6423-07, for SAN Volume Controller V5.1, as created or updated on December 21, 2009.

October 2009, Eighth Edition


This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
- Added iSCSI information
- Added Solid State Drive information

Changed information
- Removed duplicate information
- Consolidated chapters
- Removed dated material


Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to hosts and applications to physical volumes on storage devices. Each server within the SAN has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. This means that volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves management of information at the block level in a network, enabling applications and servers to share storage devices on a network.

The team who wrote this book


This book was produced by a team of specialists from around the world working at Brocade Communications, San Jose, and the International Technical Support Organization, San Jose Center.

Pall Beck is a SAN Technical Team Lead in IBM Nordic. He has 12 years of experience working with storage and joined IBM ITD DK in 2005. Prior to working for IBM in Denmark, he worked as an IBM CE performing hardware installations and repairs for IBM iSeries, pSeries, and zSeries in Iceland. As a SAN Technical Team Lead for ITD DK he led a team of administrators running some of the largest SAN installations in Europe. His current position involves the creation and implementation of operational standards and aligning best practices throughout the Nordics. Pall has a diploma as an Electronic Technician from Odense Tekniske Skole in Denmark and IR in Reykjavik, Iceland.

Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist in IBM Italy. He has 24 years of experience in the delivery of maintenance and professional services for IBM Enterprise customers in z/OS and open systems. He holds a degree in Electronics, and his areas of expertise include storage hardware, storage area networks, storage virtualization, de-duplication, and disaster recovery solutions. He has written extensively about SAN and virtualization products in three IBM Redbooks, and he is the Technical Leader of the Italian Open System Storage Professional Services Community.

Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 25 years of experience in software development, project management, and consulting, concentrating on the networking and telecommunication segment. Werner joined IBM in 2001 and works in pre-sales as a Storage SE for Open Systems. His expertise is the design and implementation of IBM storage solutions. He holds a Dipl. Informatiker (FH) degree from Fachhochschule Konstanz, Germany.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage products. Jon has 24 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

We extend our thanks to the following people for their contributions to this project.


There are many people who contributed to this book. In particular, we thank the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and ensuring that they maintained a high profile.

In particular, we thank the previous authors of this redbook:
Matt Amanat
Angelo Bernasconi
Steve Cody
Sean Crawford
Sameer Dhulekar
Katja Gebuhr
Deon George
Amarnath Hiriyannappa
Thorsten Hoss
Juerg Hossli
Philippe Jachimczyk
Kamalakkannan J Jayaraman
Dan Koeck
Bent Lerager
Craig McKenna
Andy McManus
Joao Marcos Leite
Barry Mellish
Suad Musovich
Massimo Rosati
Fred Scholten
Robert Symons
Marcus Thordal
Xiao Peng Zhao

We would also like to thank the following people for their contributions to previous editions, and those who contributed to this edition:
John Agombar
Alex Ainscow
Trevor Boardman
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Paul Mason
Paul Merrison
Jon Parkes
Steve Randle
Lucy Raw
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
IBM Hursley

Bill Wiegand
IBM Advanced Technical Support


Dorothy Faurot
IBM Raleigh

Sharon Wang
IBM Chicago

Chris Saul
IBM San Jose

Sangam Racherla
IBM ITSO

A special mention must go to Brocade for their unparalleled support of this residency in terms of equipment and support in many areas throughout, namely:
Jim Baldyga
Yong Choi
Silviano Gaona
Brian Steffler
Steven Tong
Brocade Communications Systems

Become a published author


Join us for a two- to six-week residency program! Help write a book dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will have the opportunity to team with IBM technical professionals, Business Partners, and Clients. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an e-mail to:
  redbooks@us.ibm.com
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400



Chapter 1. Introduction to storage virtualization


This chapter provides a definition of storage virtualization. It gives a short overview of today's most pressing storage issues and explains where and how storage virtualization can help you address them.


1.1 What is storage virtualization


First of all, storage virtualization is an overused term. More often than not it is used as a buzzword to claim that a product is virtualized. The truth is that almost every storage hardware and software product could technically claim to provide some form of block level virtualization. So where do we stop and draw a line in the sand? Does the fact that a laptop has some logical volumes carved from a single physical drive mean it is virtual? Not really. So what is storage virtualization all about?

IBM's official view of storage virtualization is crisp and clear: storage virtualization is a technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics. It is a logical representation of resources that is not constrained by physical limitations and that:
- Hides some of the complexity
- Adds or integrates new function with existing services
- Can be nested or applied to multiple layers of a system

When discussing storage virtualization it is important to understand that virtualization can be implemented on different layers in the I/O stack. We have to clearly distinguish between virtualization on the file system layer and virtualization on the block (that is, disk) layer. The focus of this book is block level virtualization, that is, the block aggregation layer. File system virtualization is outside the intended scope of this book. Readers interested in that topic should look towards IBM's General Parallel File System (GPFS) or IBM's Scale Out Filesystem (SOFS), which is based on GPFS. For more information and an overview of the IBM General Parallel File System (GPFS) Version 3, Release 2 for AIX, Linux, and Windows, go to:
http://www-03.ibm.com/systems/clusters/software/whitepapers/gpfs_intro.html
For the IBM Scale Out Filesystem, go to:
http://www-935.ibm.com/services/us/its/html/sofs-landing.html

A good overview of the storage domain and its different layers is given by the Storage Networking Industry Association's (SNIA) block aggregation model (Figure 1-1). It shows the three layers of a storage domain: the file layer, the block aggregation layer, and the block subsystem layer. The model splits the block aggregation layer into three sub-layers: block aggregation can be realized within hosts (servers), in the storage network (storage routers and storage controllers), or in storage devices (intelligent disk arrays). IBM's implementation of a block aggregation solution is the IBM System Storage SAN Volume Controller, implemented as a clustered appliance in the storage network layer. A deeper look at how and why IBM has chosen to implement the SAN Volume Controller in the storage network layer is given in Chapter 2, IBM System Storage SAN Volume Controller overview on page 7.


Figure 1-1 SNIA Block Aggregation Model

The key concept of virtualization is to de-couple the storage (as delivered by commodity two-way RAID controllers attaching physical disk drives) from the storage functions that servers expect in today's SAN environment. De-coupling is achieved by abstracting the physical location of data from the logical representation that an application on a server sees. The basic task of the virtualization engine is to present logical entities, such as volumes, to the user and to manage internally the process of mapping them to their actual physical locations. How this mapping is realized depends on the specific implementation, as does the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a single physical disk.

A single block of information in such an environment is identified by its logical unit number (LUN), that is, the physical disk, and an offset within that LUN, known as a logical block address (LBA). Keep in mind that the term physical disk used in this context describes a piece of storage that might have been carved out of a RAID array in the underlying disk subsystem. The address space is mapped between the logical entity, usually referred to as a virtual disk (VDisk), and the physical disks, each identified by its LUN. The LUNs that the storage controllers provide to the virtualization layer are referred to as managed disks (MDisks) throughout this book. An overview of block level virtualization is given in Figure 1-2 on page 4, and a brief command line sketch after the figure makes this mapping more concrete.


Figure 1-2 Block level virtualization overview
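
To make the mapping concept more tangible, the following brief sketch shows how, on the SVC command line, a storage pool with a fixed extent size might be built from two MDisks and a striped VDisk carved out of it. The object names and sizes are purely illustrative, the commands assume that MDisks named mdisk0 and mdisk1 are already visible to the cluster, and the exact options and listing output can vary by code level, so treat this as a sketch rather than a definitive procedure.

# Create a managed disk group (storage pool) with a 256 MB extent size
# from two LUNs that the back-end controller presents to the SVC
svctask mkmdiskgrp -name MDG_EXAMPLE -ext 256 -mdisk mdisk0:mdisk1

# Create a 100 GB striped VDisk; its extents are spread across the MDisks
# in the group, and the cluster maintains the virtual-to-physical mapping
svctask mkvdisk -name VDISK_EXAMPLE -iogrp 0 -mdiskgrp MDG_EXAMPLE -size 100 -unit gb -vtype striped

# Show how many extents each MDisk contributes to the VDisk
svcinfo lsvdiskextent VDISK_EXAMPLE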

So what does this mean? It means that the server and application know only about logical entities and access them through a consistent interface provided by the virtualization layer. Each logical entity owns a common and well-defined set of functions, independent of where the physical representation is located. The functionality of a VDisk presented to a server, such as expanding or reducing the size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy or snapshot, thin provisioning or over-allocation, and so on, is implemented in the virtualization layer and does not rely in any way on the functionality provided by the disk subsystems delivering the MDisks.

Data stored in a virtualized environment is stored in a location-independent way. This allows a user to move or migrate data, or parts of it, to a different place or storage pool, that is, the place that the data really belongs to. The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored, migrated, and so on, without any disruption to the server and application. Once you have an abstraction layer in the SAN, you can do just about anything.

When we think of block level storage virtualization, we see a system that must provide what we have chosen to call the Cornerstones of Virtualization. Quite simply, these are the set of core advantages that a product such as the SVC can provide over traditional direct attached SAN storage, namely:
1. Online volume migration while applications are running. This is possibly the killer application for storage virtualization. It enables you to put your data where it belongs and, if requirements change over time, to move it to the right place or storage pool without impacting your server or application. This supports the implementation of a tiered storage environment with different storage classes for information life-cycle management (ILM), balancing I/O across controllers, and adding, upgrading, and retiring storage; that is to say, it allows you to put your data where it really belongs.
2. Simplification of storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage (after some initial array setup, of course).
3. Enterprise level copy services for existing storage. The customer can license a function once and use it everywhere. New storage can be purchased as low-cost RAID bricks. (The source and target of a copy relationship can be on different controllers.)
4. Increased storage utilization by pooling storage across the SAN.
5. The potential to increase system performance by reducing hot spots, striping disks across many arrays and controllers, and, in some implementations, providing additional caching.

The ability to deliver these functions in a homogeneous way, on a scalable and highly available platform, over any attached storage and to every attached server, is the key challenge for every block level virtualization solution.

1.2 User requirements that drive storage virtualization


In today's 'smarter planet' and 'dynamic infrastructure' you need a storage environment that can stand up and hold its own against application and server mobility. Business demands are changing quickly. The five key customer concerns driving storage virtualization are:
1. Growth in datacenter costs
2. Inability of IT organizations to respond quickly enough to business demands
3. Poor asset utilization
4. Poor availability or service levels
5. Lack of skilled staff for storage administration

The importance of dealing with the complexity of managing storage networks is brought to light by the total-cost-of-ownership (TCO) metric applied to storage networks. The message we get from different industry analyses is quite clear: storage acquisition costs are only about 20 percent of the TCO. Most of the remaining costs are related, in one way or another, to managing the storage system.

Managing lots of different boxes through different interfaces raises the question: how much can you manage as a single entity? If you think about your un-virtualized storage today, every box is an island. Even a big monolithic box that claims to virtualize is still an island, and it will have to be replaced at some stage in the future. With the SVC you can seriously reduce the number of islands you need to manage, ideally to one; if that is out of reach and you have many tens or even hundreds of boxes today, being able to reduce that to tens is a step in the right direction.

The SVC provides a single interface for storage management. Of course, there is an initial effort for the setup of the disk subsystems; however, all day-to-day storage management can be performed on the SVC. As an example, if we think about data migration as disk subsystems are phased out, the data migration functionality of the SVC can be used: the data can be moved online and without any impact on your servers, as the sketch that follows illustrates.
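
As an illustration only, the following hedged sketch shows how such an online move of a VDisk to another storage pool might look from the SVC command line. The VDisk and MDisk group names are made-up examples, and the exact syntax can vary between code levels; Chapter 9, Data migration, covers these scenarios in detail.

# Move a VDisk from the pool on the outgoing disk subsystem to a pool on
# the new one; hosts keep accessing the same VDisk while its extents move
svctask migratevdisk -vdisk APP_VDISK1 -mdiskgrp MDG_NEW_DS

# The migration runs in the background; check its progress
svcinfo lsmigrate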


Also, advanced functions such as data mirroring or FlashCopy are provided by the virtualization layer, which means that there is no need to purchase them again for each new disk subsystem.

Today, it is typical that open systems run at well below 50% of the usable capacity that the RAID disk subsystems provide. Doing the math using the installed raw capacity in the disk subsystems will, depending on the RAID level used, show utilization numbers of less than 35%. A block level virtualization solution such as the SVC can help you increase that to something like 75 or 80%. With the SVC there is no need to keep and manage free space in every single disk subsystem, and you do not need to worry so much about whether there is sufficient free space on the right storage tier or in a single box. Even if there is enough free space in a single box, it might not be accessible in a non-virtualized environment for a specific server or application because of multipath driver issues. The SVC is able to handle the storage resources it manages as one single storage pool. Disk space allocation from this pool is a matter of minutes for every server connected to the SVC, as you just provision the capacity as needed, without disrupting applications; a brief sketch of such an allocation follows.
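
For illustration only, allocating capacity from the shared pool and presenting it to a host might look as follows on the SVC command line. The host, VDisk, and MDisk group names are hypothetical, and the detailed host and VDisk procedures appear later in this book.

# Create a new 50 GB VDisk from the shared storage pool
svctask mkvdisk -name APP_VDISK2 -iogrp 0 -mdiskgrp MDG_POOL1 -size 50 -unit gb

# Map the new VDisk to a host that is already defined to the cluster
svctask mkvdiskhostmap -host APPSERVER01 APP_VDISK2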

1.3 Conclusion
Storage virtualization is no longer merely a concept or a bleeding-edge technology; all major storage vendors offer storage virtualization products. Using storage virtualization as the foundation for a flexible and reliable storage solution helps a company better align business and IT by optimizing the storage infrastructure and its management to match business demands. The IBM System Storage SAN Volume Controller is a mature, fifth generation virtualization solution that uses open standards and is consistent with the SNIA (Storage Networking Industry Association) storage model. It is realized as appliance-based, in-band block virtualization, in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network. We expect that the use of the SVC will improve the utilization of storage resources, simplify storage management, and, last but not least, improve the availability of your applications.


Chapter 2. IBM System Storage SAN Volume Controller overview


This chapter describes the major concepts behind the IBM System Storage SAN Volume Controller. It covers not only the hardware architecture but also the software concepts. A brief history of the product is given, and the additional functions that become available with the newest release are introduced.


2.1 History
IBM's implementation of block level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), has its roots in an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMPASS (COMmodity PArts Storage System). One of its goals was to build a system almost exclusively from off-the-shelf standard parts. Like any enterprise level storage control system, it had to deliver high performance and availability, comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system based on a scalable cluster of lower-performance Pentium-based servers, instead of a monolithic architecture of two nodes, is still a compelling one. COMPASS also had to address a major challenge for the heterogeneous open systems environment, namely reducing the complexity of managing storage on block devices.

The first publications covering this project were released to the public in 2003 in the IBM Systems Journal, Vol. 42, No. 2, 2003, The architecture of a SAN storage control system, by J. S. Glider, C. F. Fuente, and W. J. Scales. It can be found at:
http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/b97a551f7e510eff85256d660078a12e?OpenDocument

The results of the COMPASS project defined the fundamentals for the product architecture. The announcement of the first release of the IBM System Storage SAN Volume Controller took place in July 2003. The following releases brought new, more powerful hardware nodes that approximately doubled the I/O performance and throughput of their predecessors, new functionality, and additional interoperability with new elements in host environments, disk subsystems, and the SAN. Major steps in the product's evolution were:
- SVC Release 2, February 2005
- SVC Release 3, October 2005, with new 8F2 node hardware (based on IBM X336, 8 GB cache, four 2 Gb FC ports)
- SVC Release 4.1, May 2006, with new 8F4 node hardware (based on IBM X336, 8 GB cache, four 4 Gb FC ports)
- SVC Release 4.2, May 2007, with new 8A4 entry level node hardware (based on IBM X3250, 8 GB cache, four 4 Gb FC ports) and new 8G4 node hardware (based on IBM X3550, 8 GB cache, four 4 Gb FC ports)
- SVC Release 4.3, May 2008

In 2008 the 15,000th SVC engine was shipped by IBM, and more than 5000 SVC systems are in operation worldwide. With the new release of SVC introduced in this book we get a new generation of hardware nodes. This hardware, which approximately doubles the performance of its predecessors, also provides Solid State Disk (SSD) support. New software features are iSCSI support, which is available on all hardware nodes that support the new firmware, and multiple SVC cluster partnerships, which support data replication between the members of a group of up to four SVC clusters.


2.2 Architectural overview


The IBM System Storage SAN Volume Controller is a SAN block aggregation appliance designed for attachment to a variety of host computer systems. There are three main approaches in use today for the implementation of block level aggregation:
1. Network based - appliance: The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization, or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage.
2. Switch based - split path: The device is usually an intelligent SAN switch that intercepts I/O requests on the fabric and redirects the frames to the correct storage location; the actual I/O requests are themselves redirected. This kind of implementation is also referred to as asymmetric virtualization, or out-of-band. The data and control paths are separated, and a specific (preferably highly available and disaster tolerant) controller outside of the switch holds the meta-information and configuration needed to manage the split data paths.
3. Controller based: The device is a storage controller that provides an internal switch for external storage attachment. Here the storage controller intercepts and redirects I/O requests to the external storage as it would for internal storage.
The three basic approaches are shown in Figure 2-1 on page 9.

Figure 2-1 Overview basic block level aggregation architectures

While all of these approaches provide, in essence, the same basic 'Cornerstones of Virtualization', there are some interesting side effects with some or all of them.


All three approaches can provide the required functionality, but when it comes to implementation, the switch-based split I/O architecture in particular makes some of the required functions more difficult to deliver. This is especially true for the FlashCopy services. Taking a point in time clone of a device in a split I/O architecture means that all the data has to be copied from the source to the target first. The drawback is that the target copy cannot be brought online until the entire copy has completed, that is, minutes or hours later. Think of using this for implementing a 'sparse' flash, that is, a FlashCopy without background copy where the target disk is only populated with the blocks or extents that are modified after the point in time the FlashCopy was taken, or an 'incremental' series of 'cascaded' copies.

Scalability is another issue here, as it can be difficult to scale out to n-way clusters of intelligent line cards. A multi-way switch design is also very difficult to code and implement because of the issues in maintaining fast updates to metadata and keeping that metadata synchronized across all processing blades; this has to be done at wire speed or you lose that claim. For the same reason, space-efficient copies and replication are also difficult to implement. Both synchronous and asynchronous replication require some level of buffering of I/O requests; while switches do have buffering built in, the number of additional buffers required would be huge and grows as the link distance increases. Most of today's intelligent line cards do not provide anywhere near this level of local storage. The most common solution is to use an external box to provide the replication services, which means another box to manage and maintain, which runs counter to the concepts of virtualization. Also keep in mind when choosing a split I/O architecture that your virtualization implementation will be locked to the actual switch type and hardware you are using. This makes it very hard to implement any future changes.

The controller based approach does well with respect to functionality, but fails when it comes to scalability or upgradability. This is caused by the nature of its design, as there is no true decoupling with this approach. This becomes an issue when you have to life-cycle such a solution, that is, such a controller. You will be challenged with data migration issues and questions like how to reconnect the servers to the new controller, and how to do it online without any impact to your applications. Be aware that in such a scenario you will not only be replacing a controller, but also, implicitly, replacing your entire virtualization solution. You will not only have to replace your hardware but also update or re-purchase the licences for the virtualization feature, advanced copy functions, and so on.

With a network appliance solution based on a scale-out cluster architecture, life-cycle management tasks like adding or replacing disk subsystems or migrating data between them are simplified to the extreme. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licences for virtualization and copy services require no update, that is, they cause no additional costs when disk subsystems have to be replaced.

Only the network-based appliance solution provides an independent and scalable virtualization platform that can provide enterprise-class copy services, is open for future interfaces and protocols, lets you choose the disk subsystems that best fit your requirements, and, last but not least, does not lock you in to specific SAN hardware. These are some of the reasons why IBM has chosen the network-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller.


The key characteristics of the SVC are:
1. Highly scalable: an easy growth path to 2n nodes, that is, growth in pairs of nodes
2. SAN interface independent: it currently supports Fibre Channel and iSCSI, but is also open for future enhancements such as InfiniBand or others
3. Host independent: for fixed block based Open Systems environments
4. Storage (RAID controller) independent: there is an ongoing plan to qualify additional types of RAID controllers
5. Able to utilize commodity RAID controllers: so called Low Complexity RAID Bricks
6. Able to utilize node internal disk, that is, Solid State Disks

On the SAN storage provided by the disk subsystems, the SVC can offer the following services:
1. Create and manage a single pool of storage attached to the SAN
2. Block level virtualization, that is, Logical Unit virtualization
3. Provide advanced functions to the entire SAN, such as:
   - A large scalable cache
   - Advanced Copy Services, such as FlashCopy (Point in Time Copy) and Metro Mirror and Global Mirror (remote copy, synchronous/asynchronous)
   - Data Migration

For future releases, this feature list will grow. This additional layer could provide future features such as policy-based space management that maps your storage resources based on desired performance characteristics, or dynamic re-allocation of entire VDisks, or parts of them, according to user-definable performance policies. As mentioned before, as soon as the decoupling is properly done, that is, an additional layer is installed between the servers and the storage, everything is possible.

SAN-based storage infrastructures using SVC are configured with two or more SVC nodes, arranged in a cluster. These nodes are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to see the RAID controllers, and the hosts to see the SVC. The hosts are not usually able to directly see or operate on the RAID controllers unless a split controller configuration is in use. The zoning capabilities of the SAN switch are used to create these distinct zones. The assumptions made about the SAN fabric are kept to a minimum to make it possible to support a number of different SAN fabrics with minimum development effort. Anticipated SAN fabrics include Fibre Channel and iSCSI over Gigabit Ethernet, and others might follow in the future.

Figure 2-2 shows a conceptual diagram of a storage system utilizing the SVC. It shows a number of hosts connected to a SAN fabric or LAN. In practical implementations that have high availability requirements (the majority of the target customers for SVC), the SAN fabric cloud represents a redundant SAN comprising a fault-tolerant arrangement of two or more counterpart SANs, providing alternate paths for each SAN-attached device. For iSCSI/LAN-based access networks to the SVC, both scenarios, that is, using a single network or using two physically separated networks, are supported. Redundant paths to VDisks can be provided for both scenarios.


Figure 2-2 SVC conceptual overview

A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts. These virtual disks are created from MDisks presented by the RAID controllers. There are two distinct zones shown in the fabric: a host zone, in which the hosts can see and address the SVC nodes, and a storage zone, in which the SVC nodes can see and address the MDisks/LUNs presented by the RAID controllers. Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens through the SVC nodes. This is commonly described as symmetric virtualization. Figure 2-3 on page 13 shows the SVC logical topology.


Figure 2-3 SVC topology overview

For the sake of simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. As already mentioned, in a real environment the recommendation is to use two redundant SAN fabrics. The SVC can be connected to up to four fabrics. Zoning has to be done per host, per disk subsystem, and per fabric. Zoning details can be found in 3.3.2, SAN zoning and SAN connections on page 74. For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using dedicated VLANs for storage traffic prevents any IP interface, switch, or target port failure from compromising the host servers' access to the VDisks.

2.2.1 SVC Virtualization Concepts


The SVC product provides block-level aggregation and volume management for disk storage within the SAN. In simpler terms, this means that the SVC manages a number of back-end storage controllers and maps the physical storage within those controllers into logical disk images that can be seen by application servers and workstations in the SAN. The SAN is zoned in such a way that the application servers cannot see the back-end physical storage, which prevents any possible conflict between the SVC and the application servers both trying to manage the back-end storage.

The SVC is based on the following virtualization concepts, which are discussed in more detail throughout this chapter.

A node is a single SVC hardware unit that provides virtualization, cache, and copy services to the SAN. SVC nodes are deployed in pairs to make up a cluster. A cluster may have between one and four SVC node pairs in it. This is not an architectural limit, but a limit of the current product. Each pair of SVC nodes is also referred to as an I/O Group; an SVC cluster can therefore have between one and four I/O Groups. A specific Virtual Disk (VDisk) is always presented to a host server by a single I/O Group of the cluster.


When a host server performs I/O to one of its VDisks, all the I/Os for that specific VDisk are directed to one specific I/O Group in the cluster. During normal operating conditions, the I/Os for a specific VDisk are always processed by the same node of the I/O Group. This node is referred to as the preferred node for this specific VDisk.

Both nodes of an I/O Group act as the preferred node, each for its own specific subset of the total number of VDisks that the I/O Group presents to the host servers. But both nodes also act as the failover node for their partner node in the I/O Group, and will take over the I/O handling from the partner node if required. In an SVC-based environment, the I/O handling for a VDisk can switch between the two nodes of an I/O Group; it is therefore mandatory for servers connected via Fibre Channel to use multipath drivers in order to be able to handle such failover situations.

SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all communications with back-end storage subsystems, and with other SVC clusters, are still via Fibre Channel. For iSCSI, node failover can be handled without a multipath driver installed on the server. An iSCSI-attached server may simply reconnect after a node failover to the original target IP address, which is now presented by the partner node. To protect the server against link failures in the network or HBA failures, a multipath driver is mandatory.

The SVC I/O Groups are connected to the SAN in such a way that all application servers accessing VDisks from an I/O Group have access to that group. Up to 256 host server objects can be defined per I/O Group, that is to say, they can consume VDisks provided by this specific I/O Group. If required, host servers can be mapped to more than one I/O Group of an SVC cluster, and can therefore access VDisks from different I/O Groups. VDisks can be moved between I/O Groups to redistribute the load between them. With the current release of SVC, I/Os to the VDisk being moved have to be quiesced for a short time, for the duration of the move.

The SVC cluster and its I/O Groups view the storage presented to the SAN by the back-end controllers as a number of disks, known as Managed Disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The application servers, on the other hand, do not see the MDisks at all. Instead, they see a number of logical disks, known as Virtual Disks or VDisks, which are presented by the SVC I/O Groups via the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage that is provisioned out of one, or, if it is a mirrored VDisk, out of two Managed Disk Groups (MDGs).

An MDG is a collection of up to 128 MDisks, creating the storage pools that VDisks are provisioned out of. A single cluster can manage up to 128 MDGs. The size of these pools can be changed at runtime (expanded or shrunk) without taking the MDG or the VDisks provided by it offline. At any point in time, an MDisk can only be a member of one MDG, with one exception (the image mode VDisk), which is explained later in this chapter.

MDisks used in a specific MDG should have the following characteristics:
- They should have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk RPM. Be aware that it is always the weakest element (MDisk) in a chain that defines the maximum strength of that chain (MDG).
- The disk subsystems providing the MDisks should have similar characteristics, for example, maximum IOPS, response time, cache, and throughput.
- Use MDisks of the same size (recommended) and MDisks that provide the same number of extents (recommended). Keep this in mind when you are adding MDisks to an existing MDG. If that is not feasible, check the distribution of the VDisks' extents in that MDG.


For further details, refer to the Redbook SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which can be found at:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

VDisks are mapped to a host in order to allow a specific server to access a set of VDisks. A host within the SVC is a collection of HBA WWPNs or iSCSI names (IQNs) defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs generated by the SVC itself. VDisks can be mapped to multiple hosts, for example, a VDisk accessed by multiple hosts of a server cluster. How these different entities belong together is shown in Figure 2-4.

Figure 2-4 SVC I/O Group Overview
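To make the relationship between these objects more concrete, the following CLI sketch creates a host object and maps a VDisk to it. It is only an illustrative example: the host name, WWPN, and VDisk name are hypothetical, and the exact command syntax should be verified against Chapter 7, SVC operations using the CLI on page 337.

# Create a host object from one of the server's HBA WWPNs
svctask mkhost -name AIX_HOST_01 -hbawwpn 210000E08B054CAA

# Map an existing VDisk to that host (LUN masking)
svctask mkvdiskhostmap -host AIX_HOST_01 VDISK_DB_01

After the mapping is created, the server discovers the VDisk as a SCSI LUN on its next device scan.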

An MDisk can be provided by a SAN disk subsystem or by the solid state disks (SSDs) installed in the SVC nodes themselves. Each MDisk is divided up into a number of extents. The extent size is selected by the user at creation time of an MDG and ranges from 16 MB (the default) up to 2 GB.

The recommendation is to use the same extent size for all MDGs in a cluster, because this is a prerequisite for supporting VDisk migration between two MDGs. If the extent sizes do not match, you have to use VDisk mirroring (see 2.2.7, Mirrored VDisk on page 21) as a workaround. To simply copy (not migrate) the data into another MDG to a new VDisk, SVC Advanced Copy Services can be used.

The two most popular ways in which VDisks can be provisioned out of an MDG are shown in Figure 2-5. Striped mode is the recommended one for most cases. Sequential extent allocation mode may slightly increase sequential performance for some workloads.


Figure 2-5 Overview Managed Disk Group

Extents for a VDisk can be allocated in many different ways. The process is under full user control at VDisk creation time and can be changed at any time by migrating single extents of a VDisk to another MDisk within the MDG. Details of how to create VDisks and migrate extents via the GUI or CLI can be found in Chapter 7, SVC operations using the CLI on page 337, Chapter 8, SVC operations using the GUI on page 469, and Chapter 9, Data migration on page 675.

SVC limits the number of extents in a cluster. The number is currently 2^22 (approximately 4 million) extents, and this number may change in future releases. Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size chosen by the user. Assuming all defined MDGs have been created with the same extent size, we get the capacity numbers specified in Table 2-1 for an SVC cluster.
Table 2-1 Extent size to addressability matrix

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB

Most of today's clusters will be fine with 1 to 2 PB of capacity. We therefore recommend using 256 MB or, for larger clusters, 512 MB as the standard extent size.
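As an illustration, the following hedged CLI sketch creates an MDG with a 256 MB extent size from four unmanaged MDisks and then lists the result. The group and MDisk names are hypothetical; verify the syntax against Chapter 7, SVC operations using the CLI on page 337.

# Create a Managed Disk Group with a 256 MB extent size
svctask mkmdiskgrp -name MDG_DS47_01 -ext 256 -mdisk mdisk0:mdisk1:mdisk2:mdisk3

# Verify the new group, its extent size, and its free capacity
svcinfo lsmdiskgrp MDG_DS47_01

Because all MDGs created this way share the same extent size, VDisks can later be migrated between them without restrictions.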

2.2.2 MDisk Overview


The maximum size of an MDisk is 2 TB. An SVC cluster supports up to 4096 MDisks. At any point in time, an MDisk is in one of the following three modes:

1. Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any Managed Disk Group. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. The SVC can see the resource, but it is not assigned to a pool, that is, an MDG.

2. Managed MDisk
Managed mode MDisks are always members of an MDG and contribute extents to the pool of extents available in the Managed Disk Group. Zero or more VDisks (if not operated in image mode, see below) may use these extents. MDisks operating in managed mode may have metadata extents allocated from them and may be used as quorum disks.

3. Image Mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the Virtual Disk by using virtualization. This mode is provided to satisfy three main usage scenarios:
a. To allow virtualization of Managed Disks that already contain data that was written directly, not through an SVC. It allows a customer to insert the SVC into the data path of an existing storage configuration with minimal downtime. Details of the data migration process are given in Chapter 9, Data migration on page 675.
b. To allow a virtual disk managed by the SVC to be used with the copy services provided by the underlying RAID controller. In order to avoid loss of data integrity when the SVC is used in this way, it is important that the SVC cache be disabled for the VDisk.
c. To allow migration out of the SVC: the SVC provides the ability to migrate to image mode, which allows VDisks to be exported so that a server can access them directly, without the SVC in the data path.

An image mode MDisk is associated with exactly one VDisk. The last extent is partial if the (image mode) MDisk is not a multiple of the MDisk Group's extent size (see Figure 2-6 on page 18). An image mode VDisk is a pass-through, one-to-one map of its MDisk. It cannot be a quorum disk and will not have any SVC metadata extents allocated on it. Managed or image mode MDisks are always members of an MDG.


Figure 2-6 Image Mode MDisk Overview

If you work with image mode MDisks, it is a best practice to put them in a dedicated MDG and to use a special naming convention for it (for example, MDG_IMG_XXX). Bear in mind that the extent size chosen for this specific MDG has to match the extent size of the MDG into which you plan to migrate the data. All SVC copy services can be applied to image mode disks.

2.2.3 VDisk Overview


The maximum size of a VDisk is 256 TB. An SVC cluster supports up to 4096 VDisks. The following services are supported on a VDisk:
- VDisks can be created and deleted.
- The size of a VDisk can be changed (expanded or shrunk).
- VDisks can be migrated (fully or partially) at runtime to another MDisk or storage pool (MDG).
- VDisks can be created as fully allocated or Space-Efficient VDisks. A conversion from a fully allocated to a Space-Efficient VDisk, and vice versa, can be done at runtime.
- VDisks can be mirrored (stored in two MDGs) to make them resistant against disk subsystem failures or to improve read performance.
- VDisks can be mirrored synchronously for distances up to 100 km or asynchronously for longer distances. An SVC cluster can run active data mirrors to a maximum of three other SVC clusters.
- VDisks can be FlashCopied. Multiple snapshots and quick restore from snapshots (Reverse FlashCopy) are supported.

VDisks have two modes: Image Mode and Managed Mode. The state diagram in Figure 2-7 on page 19 shows the state transitions.


Figure 2-7 VDisk state transitions (states: Doesn't exist, Managed mode, Image mode, and Managed mode migrating; transitions: create managed mode VDisk, create image mode VDisk, delete VDisk, migrate to image mode, and complete migrate)

Managed Mode VDisks have two allocation policies: sequential and striped. The policy defines how the extents of a VDisk are carved out of an MDG.

2.2.4 Image Mode VDisk


Image mode provides a one-to-one mapping of LBAs between a VDisk and an MDisk. Image mode virtual disks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is mapped to one, and only one, image mode VDisk. The VDisk capacity specified must be less than or equal to the size of the image mode MDisk.

When you create an image mode VDisk, the MDisk specified must be in unmanaged mode and must not be a member of an MDG. The MDisk is made a member of the specified MDG (MDG_IMG_XXX) as a result of the creation of the image mode VDisk. The SVC also supports the reverse process, that is to say, managed mode VDisks can be migrated to image mode VDisks. If a VDisk is migrated to another MDisk, it is represented as being in managed mode during the migration, and is only represented as an image mode VDisk once it has reached the state where it is a straight-through mapping.
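The following hedged CLI sketch shows how an image mode VDisk might be created from an unmanaged MDisk that already contains data. Object names are hypothetical, and the exact syntax should be checked against Chapter 7, SVC operations using the CLI on page 337.

# List candidate MDisks that are still unmanaged
svcinfo lsmdisk -filtervalue mode=unmanaged

# Create an image mode VDisk; the MDisk joins the dedicated image mode MDG
svctask mkvdisk -mdiskgrp MDG_IMG_XXX -iogrp 0 -vtype image -mdisk mdisk7 -name VDISK_IMG_01

No size is specified in this sketch, because an image mode VDisk takes its capacity from the underlying MDisk.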

2.2.5 Managed Mode VDisk


VDisks operating in Managed Mode provide a full set of virtualization functions. Within an MDG, the SVC supports an arbitrary relationship between extents on (Managed Mode) VDisks and extents on MDisks, subject to the constraint that each MDisk extent is contained in at most one VDisk, and each VDisk extent maps to exactly one MDisk extent.

Figure 2-8 on page 20 represents this diagrammatically. It shows Virtual Disk V, which is made up of a number of extents. Each of these extents is mapped to an extent on one of the Managed Disks A, B, or C. The mapping table stores the details of this indirection. Note that some of the Managed Disk extents are unused, that is, there is no VDisk extent that maps to them. These unused extents are available for use in creating new VDisks, migration, expansion, and so on.

Figure 2-8 Simple view of Block virtualization

A Managed Mode VDisk can have a size of zero blocks, in which case it occupies zero extents. Such a VDisk cannot be mapped to a host or take part in any Advanced Copy Services functions.

The allocation of a specific number of extents from a specific set of Managed Disks is performed by the following algorithm: where the set of MDisks to allocate extents from contains more than one disk, extents are allocated from the MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round robin moves to the next MDisk in the set that has a free extent.

Beginning with SVC 5.1, when creating a new VDisk, the first MDisk to allocate an extent from is chosen in a pseudo-random way rather than simply choosing the next disk in a round-robin fashion. The pseudo-random algorithm avoids the situation whereby the striping effect inherent in a round-robin algorithm places the first extent for a large number of VDisks on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk could lead to poor performance for workloads that place a large I/O load on the first extent of each VDisk, or that create multiple sequential streams.
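The following hedged sketch creates a striped Managed Mode VDisk and then lists how its extents are spread over the MDisks in the group. Names and values are hypothetical; refer to Chapter 7, SVC operations using the CLI on page 337 for the exact syntax and output.

# Create a 100 GB striped VDisk in an existing MDG, owned by I/O Group 0
svctask mkvdisk -mdiskgrp MDG_DS47_01 -iogrp 0 -vtype striped -size 100 -unit gb -name VDISK_APP_01

# Show how many extents of this VDisk reside on each MDisk
svcinfo lsvdiskextent VDISK_APP_01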

2.2.6 Cache Mode and Cache Disabled VDisks


Prior to SVC V3.1, enabling any copy services function in a RAID array controller for a LUN that was being virtualized by SVC was not supported because the behavior of the write-back cache in the SVC would have led to data corruption. With the advent of cache disabled VDisks, it becomes possible to enable copy services in the underlying RAID array controller for LUNs that are virtualized by the SVC.


Wherever possible, we recommend using SVC copy services in preference to the underlying controller copy services.

2.2.7 Mirrored VDisk


Starting with SVC 4.3, the Mirrored VDisk feature provides a simple RAID 1 function, which allows a VDisk to remain accessible even when an MDisk on which it depends has become inaccessible. This is achieved using two copies of the VDisk, typically allocated from different MDisk Groups, or using image mode copies. The VDisk is the entity that participates in FlashCopy and Remote Copy relationships, is served by an I/O Group, and has a preferred node. Each copy has the virtualization attributes, such as the MDisk Group and the policy (striped, sequential, or image). A copy is not a separate object and cannot be created or manipulated except in the context of the VDisk. Copies are identified via the configuration interface with a copy ID of their parent VDisk. This copy ID can be either 0 or 1; depending on the configuration history, a single copy may have an ID of either 0 or 1.

The feature does provide a point-in-time copy functionality, achieved by splitting a copy from the VDisk. The feature does not address other forms of mirroring based on Remote Copy (sometimes called HyperSwap), which mirrors VDisks across I/O Groups or clusters, nor is it intended to manage mirroring or remote copy functions in back-end controllers. Figure 2-9 gives an overview of VDisk Mirroring.

Figure 2-9 VDisk Mirroring Overview

A copy may be added to a VDisk that has only one copy, or removed from a VDisk that has two. Checks prevent the accidental removal of the sole copy of a VDisk. A newly created, unformatted VDisk with two copies will initially have the copies out of synchronization. The primary copy is defined as fresh and the secondary copy as stale. The synchronization process updates the secondary copy until it is synchronized. This is done at the default synchronization rate, or at a rate defined when creating the VDisk or subsequently modifying it.

If a two-copy Mirrored VDisk is created with the format parameter, both copies are formatted in parallel, and the VDisk comes online when both operations are complete and the copies are in sync. If a Mirrored VDisk is expanded or shrunk, all of its copies are also expanded or shrunk.

If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a no synchronization option can be selected, which declares the copies as synchronized (even when they are not).

The time for a copy that has become unsynchronized to resynchronize is minimized by copying only those 256 KB grains that have been written to since synchronization was lost. This is known as incremental synchronization: only the changed grains need to be copied to restore synchronization.

Important: An unmirrored VDisk may be migrated from a source to a destination by adding a copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy. This operation may be stopped at any time. The two copies can be in different MDisk Groups with different extent sizes. A CLI sketch of this copy-based migration is given at the end of this section.

Where there are two copies of a VDisk, one is known as the primary copy. If the primary is available and synchronized, reads from the VDisk are directed to it. The user may select the primary when creating the VDisk, or may change it later. Selecting the copy allocated on the higher-performance controller maximizes the read performance of the VDisk. The write performance is constrained by the lower-performance controller, because writes must complete to both copies before the VDisk is considered to have been successfully written. This has to be kept in mind when a Mirrored VDisk is created with one copy in an SSD MDG and the second copy in an MDG populated with resources from a disk subsystem.

Note: The SVC does not prevent you from creating the two copies in one or more SSD MDGs of the same node. Doing so, however, means that you lose redundancy and could therefore be faced with loss of access to your VDisk if the node fails or restarts.

A VDisk with copies may be checked to see whether all copies are identical. If a medium error is encountered while reading from any copy, it is repaired using data from another fresh copy. This process may be asynchronous, but it gives up if the copy with the error goes offline.

Mirrored VDisks consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB worth of Mirrored VDisks. The default allocation of bitmap space is 20 MB, which supports 40 TB of Mirrored VDisks. If all 512 MB of variable bitmap space is allocated to Mirrored VDisks, 1 PB of Mirrored VDisks can be supported.

The advent of the Mirrored VDisk feature will inevitably lead customers to think about two-site solutions for cluster and VDisk availability. Generally, the advice is not to split a cluster, that is, its individual I/O Groups, across sites, but there are some configurations that will be effective. Special care has to be taken to prevent a situation that is referred to as a split-brain scenario (caused, for example, by a power glitch on the SAN switches; the SVC nodes are protected by their own UPS), where the connectivity between components is lost and a contest for the SVC cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes that won the quorum disk then experiences a permanent power loss, the cluster is lost. The way to prevent this is to use a configuration that provides effective redundancy through the careful placement of system components in fault domains. The details of such a configuration and the required prerequisites can be found in Chapter 3, Planning and configuration on page 63.
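As referenced in the Important box above, migrating an unmirrored VDisk by temporarily making it a Mirrored VDisk might look like the following hedged CLI sketch. Object names and the copy ID are hypothetical; check the exact syntax in Chapter 7, SVC operations using the CLI on page 337.

# Add a second copy of the VDisk in the destination MDG
svctask addvdiskcopy -mdiskgrp MDG_NEW_01 VDISK_DB_01

# Wait until the new copy is fully synchronized
svcinfo lsvdisksyncprogress VDISK_DB_01

# Remove the original copy (copy ID 0 in this example) to complete the migration
svctask rmvdiskcopy -copy 0 VDISK_DB_01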

2.2.8 Space-Efficient VDisks


Starting with SVC 4.3, VDisks can be configured to be either Space-Efficient or fully allocated. A Space-Efficient VDisk (SE VDisk) behaves, with respect to application reads and writes, as if it were fully allocated, including the requirements of read stability and write atomicity.

When an SE VDisk is created, the user specifies two capacities: the real capacity of the VDisk and its virtual capacity. The real capacity determines the quantity of MDisk extents that are allocated for the VDisk. The virtual capacity is the capacity of the VDisk reported to other SVC components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the SE VDisk, and it can be specified as an absolute value or as a percentage of the virtual capacity.

The Space-Efficient VDisk feature can be used on its own to create over-allocated or late-allocation VDisks, or it can be used in conjunction with FlashCopy to implement Space-Efficient FlashCopy. SE VDisks can be used in conjunction with the Mirrored VDisk feature as well; we refer to this as Space-Efficient copies of VDisks.

When an SE VDisk is initially created, a small amount of the real capacity is used for initial metadata. Write I/Os to grains of the SE VDisk that have not previously been written to cause grains of the real capacity to be used to store metadata and user data. Write I/Os to grains that have previously been written to update the grain where data was previously written. The grain size is defined when the VDisk is created and can be 32 KB, 64 KB, 128 KB, or 256 KB. An overview is given in Figure 2-10.


Figure 2-10 Overview SE VDisk

SE VDisks store both user data and metadata. Each grain requires metadata, but the overhead will never be greater than 0.1% of the user data, and it is independent of the virtual capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain size as the map grain size for best performance. If you are using a Space-Efficient VDisk directly with a host system, use a small grain size.

Note: SE VDisks need no formatting. A read I/O that requests data from unallocated data space returns zeroes. When a write I/O causes space to be allocated, the grain is zeroed prior to use. Consequently, a Space-Efficient VDisk is always effectively formatted, regardless of whether the format flag is specified when the VDisk is created. The format flag is ignored when a Space-Efficient VDisk is created or its real capacity is expanded; the virtualization component never formats the real capacity of a Space-Efficient VDisk.

The real capacity of an SE VDisk can be changed, provided that the VDisk is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the VDisk. SE VDisks use the real capacity of a VDisk in ascending order as new data is written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to reduce the real capacity of an SE VDisk to less than the capacity that is currently in use, other than by deleting the VDisk.

An SE VDisk can be configured to autoexpand, which causes the SVC to automatically expand the real capacity of the SE VDisk as its real capacity is used. Autoexpand attempts to maintain a fixed amount of unused real capacity on the VDisk. This amount is known as the contingency capacity.
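Creating an SE VDisk therefore means choosing a virtual capacity, a real capacity, a grain size, and optionally autoexpand and a warning threshold. The following hedged sketch shows what this might look like on the CLI; the names and values are hypothetical, and the flags should be verified against Chapter 7, SVC operations using the CLI on page 337.

# 500 GB virtual capacity, 10% real capacity, 32 KB grains,
# autoexpand enabled and a warning logged at 80% usage
svctask mkvdisk -mdiskgrp MDG_DS47_01 -iogrp 0 -size 500 -unit gb -rsize 10% -autoexpand -grainsize 32 -warning 80% -name VDISK_SE_01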


The contingency capacity is initially set to the real capacity assigned when the VDisk is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and the real capacity. A VDisk created with a zero contingency capacity goes offline as soon as it needs to expand, whereas a VDisk with a non-zero contingency capacity stays online until the contingency capacity has been used up.

Autoexpand does not cause space to be assigned to the VDisk that can never be used. In practice, this means autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The real capacity may be manually expanded to more than the maximum required by the current virtual capacity, and the contingency capacity is then recalculated as described previously.

To support the autoexpansion of Space-Efficient VDisks, the MDisk Groups from which they are allocated have a configurable warning capacity. When the used capacity of the group exceeds the warning capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents of image mode VDisks, the calculation uses the free capacity. For example, if a warning of 80% has been specified, the warning is logged when 20% of the free capacity remains.

Note: Space-Efficient VDisks require additional I/O operations to read and write metadata to back-end storage and generate additional load on the SVC nodes. We therefore do not recommend the use of SE VDisks for high performance applications.

An SE VDisk can be converted to a fully allocated VDisk using VDisk Mirroring. SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk by using the following procedure (a CLI sketch of this procedure is given at the end of this section):
1. Start with a VDisk having one fully allocated copy.
2. Add a Space-Efficient copy to the VDisk.
3. Allow VDisk Mirroring to synchronize the copies.
4. Remove the fully allocated copy.

This is achieved by the use of a zero-detection algorithm. Note that, as of 5.1.0, this algorithm is used only for I/O generated by the synchronization of Mirrored VDisks; I/O from other components (for example, FlashCopy) is written as normal.

Note: Consider Space-Efficient VDisks as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because the target becomes fully allocated during the initial synchronization.
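A hedged CLI sketch of the fully-allocated-to-SE conversion described above follows; it is essentially the same copy-based procedure as the migration example in 2.2.7, with Space-Efficient attributes on the new copy. Names and values are hypothetical.

# Add a Space-Efficient copy to the fully allocated VDisk
svctask addvdiskcopy -mdiskgrp MDG_DS47_01 -rsize 2% -autoexpand -grainsize 32 VDISK_DB_01

# Wait for the copies to synchronize
svcinfo lsvdisksyncprogress VDISK_DB_01

# Remove the fully allocated copy (copy ID 0 in this example)
svctask rmvdiskcopy -copy 0 VDISK_DB_01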

2.2.9 VDisk I/O governing


It is possible to constrain I/O operations so that a host is limited in the amount of I/O it can perform to a VDisk in a period of time. This can be used to satisfy a quality of service constraint, or a contractual obligation (for example, a customer agrees to pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only commands that access the medium (Read (6/10), Write (6/10), and Write and Verify) are subject to I/O governing.

Note: I/O governing is applied to remote copy secondaries as well as primaries. This means that if an I/O governing rate has been set on a VDisk that is a remote copy secondary, this governing rate is also applied to the primary. If governing is in use on both the primary and the secondary VDisks, each governed quantity is limited to the lower of the two values specified. Governing has no effect on FlashCopy or data migration I/O.

An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The budget is evenly divided between all SVC nodes that service that VDisk, that is, between the nodes that form the I/O Group of which that VDisk is a member.

The algorithm operates two levels of policing. While a VDisk on each SVC node has been receiving I/O at a rate below the governed level, no governing is performed. A check is made every minute that the VDisk on each node is continuing to receive I/O below the threshold level. Where this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os. While policing is in force:
- A budget allowance is calculated for a 1 second period.
- I/Os are counted over a period of a second.
- If I/Os are received in excess of the one second budget on any node in the I/O Group, those and later I/Os are pended.
- When the second expires, a new budget is established, and any pended I/Os are re-driven under the new budget.

This algorithm may cause I/O to backlog in the front end, which might eventually cause a Queue Full condition to be reported to hosts that continue to flood the system with I/O. If a host stays within its 1 second budget on all nodes in the I/O Group for a period of 1 minute, the policing is relaxed, and monitoring takes place over the 1 minute period as before.
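On the CLI, the governing rate is an attribute of the VDisk. The following hedged sketch changes the throttle of a hypothetical VDisk; the exact units of the rate parameter (I/Os or MBs, and the time base) should be verified in Chapter 7, SVC operations using the CLI on page 337.

# Throttle the VDisk to an I/O-count-based rate
svctask chvdisk -rate 2000 VDISK_DB_01

# Alternatively, throttle the VDisk to an MB-based rate by specifying the unit as MB
svctask chvdisk -rate 40 -unitmb VDISK_DB_01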

2.2.10 iSCSI Overview


SVC 4.3.1 and earlier support Fibre Channel as the sole transport protocol for communicating with hosts, storage, and other SVC clusters. SVC 5.1.0 introduces iSCSI as an alternative means of attaching hosts. However, all communications with back-end storage subsystems, and with other SVC clusters, are still via Fibre Channel.

Note: The new iSCSI feature is a software feature provided by the new SVC 5.1 code. This feature is available on any SVC hardware node that supports the SVC 5.1 code, that is, it is not restricted to the new 2145-CF8 nodes.

In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network, instead of requiring expensive Fibre Channel HBAs and a SAN fabric infrastructure.

A pure SCSI architecture is based on the client/server model. A client (for example, a server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into the Command Descriptor Block (CDB). The server executes the command, and completion is indicated by a special signal alert. Encapsulation and reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network, is the main function of iSCSI.

The concepts of names and addresses have been carefully separated in iSCSI:
- An iSCSI Name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms "initiator name" and "target name" also refer to an iSCSI name.
- An iSCSI Address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned via DHCP. An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC 3720 and contains (in order):
- The string "iqn."
- A date code specifying the year and month in which the organization registered the domain or sub-domain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
- Optionally, a ':', followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.

For the SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI initiator, may be defined as:
iqn.1991-05.com.microsoft:<computer name>

IQNs may be abbreviated by a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It may not be used to specify a target at login, or used during authentication. Both targets and initiators may have aliases. An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.

Note: Before changing the cluster or node names for an SVC cluster that has servers connected to it via iSCSI, be aware that because the cluster and node names are part of the SVC's IQN, you could lose access to your data by changing these names. The SVC GUI displays a specific warning; the CLI does not.

The iSCSI session consists of a Login Phase and a Full Feature Phase, which is completed with a special command.


The Login Phase of iSCSI is identical to the Fibre Channel port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI Login Phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.

As soon as the login is confirmed, the iSCSI session enters the Full Feature Phase. If more than one TCP connection was established, iSCSI requires that each command/response pair goes through one TCP connection. Thus, each separate read or write command is carried out without the necessity to trace each request across different flows. However, different transactions can be delivered through different TCP connections within one session.

An overview of the different block-level storage protocols and where the iSCSI layer is positioned is shown in Figure 2-11 on page 28.

Figure 2-11 Overview of block level protocol stacks

An introduction to standards and definitions used for iSCSI, including the relevant IETF/RFCs can be found at: http://en.wikipedia.org/wiki/ISCSI/

2.2.11 Usage of IP Addresses and Ethernet ports


The addition of iSCSI brings some changes to how Ethernet access to an SVC cluster is configured, and the SVC 5.1 releases of the GUI and the CLI reflect these changes. The existing SVC node hardware has two Ethernet ports, and until now only one has been used for cluster configuration. With the introduction of iSCSI, the second port may now also be used. The configuration details of the two Ethernet ports can be displayed by the GUI or CLI, and are also displayed on the node's panel.

There are now two kinds of IP addresses:
- A cluster management IP address is used for access to the SVC CLI, as well as the CIMOM, which runs on the SVC configuration node. As before, only a single configuration node presents a cluster management IP address at any one time, and failover of the configuration node is unchanged. However, there may now be two cluster management IP addresses, one for each of the two Ethernet ports.


- A port IP address is used to perform iSCSI I/O to the cluster. Each node may have a port IP address for each of its ports.

In the case of an upgrade to the SVC 5.1 code, the original cluster IP address is retained and will always be found on the eth0 interface on the configuration node. A second, new cluster IP address may optionally be configured in SVC 5.1. This will always be on the eth1 interface on the configuration node. When the configuration node fails, both configuration IP addresses move to the new configuration node.

An overview of the new IP addresses on an SVC node port, and the rules for how these IP addresses are moved between the nodes of an I/O Group, is given in Figure 2-12 on page 29. The management IP addresses and the iSCSI target IP addresses fail over to the partner node N2 if node N1 restarts (and vice versa). The iSCSI target IPs fail back to their corresponding ports on node N1 when node N1 is up again.

Figure 2-12 SVC 5.1 IP address overview

In an SVC cluster running 5.1 code, an eight-node cluster with full iSCSI coverage (the maximum configuration) would therefore have the following number of IP addresses:
- Two IPv4 configuration addresses (one is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other goes with eth1:0).
- One IPv4 service mode fixed address (although many DHCP addresses could also be used); this is always associated with the eth0:0 alias for the eth0 interface of the configuration node.
- Two IPv6 configuration addresses (one is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other goes with eth1:0).
- One IPv6 service mode fixed address (although many DHCP addresses could also be used); this is always associated with the eth0:0 alias for the eth0 interface of the configuration node.
- Sixteen IPv4 addresses used for iSCSI access to each node (these are associated with the eth0:1 or eth1:1 alias for the eth0 or eth1 interface on each node).
- Sixteen IPv6 addresses used for iSCSI access to each node (these are associated with the eth0 and eth1 interfaces on each node).

The configuration of the SVC ports is shown in great detail in Chapter 7, SVC operations using the CLI on page 337 and Chapter 8, SVC operations using the GUI on page 469.
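A hedged sketch of how a port IP address for iSCSI might be assigned to a node port on the CLI follows. The node IDs, addresses, and port number are hypothetical; see Chapter 7, SVC operations using the CLI on page 337 for the exact syntax.

# Assign an iSCSI port IP address to Ethernet port 1 of node 1 and node 2
svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 1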

2.2.12 iSCSI VDisk Discovery


The iSCSI target implementation on the SVC nodes makes use of the hardware offload features provided by the node's hardware. The result is minimal impact on the node's CPU load for handling iSCSI traffic, while simultaneously delivering excellent throughput (up to 95 MB/s of user data) on each of the two 1 Gb/s LAN ports. Jumbo frames (that is, MTU sizes greater than 1500 bytes) will be supported in future SVC releases.

Hosts may discover VDisks through one of the following three mechanisms:
- iSNS (Internet Storage Name Service): The SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
- SLP (Service Location Protocol): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One such service is the CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be reported.
- iSCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).

2.2.13 iSCSI Authentication


Authentication of the host server towards the SVC cluster is optional and is disabled by default. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the SVC cluster and the host. After the successful completion of the link establishment phase, the SVC, as the authenticator, sends a challenge message to the specific server (peer). The server responds with a value calculated using a one-way hash function on the Index/Secret/Challenge, such as an MD5 checksum hash. The response is checked by the SVC against its own calculation of the expected hash value. If there is a match, the SVC acknowledges the authentication. If not, the SVC terminates the connection and does not allow any I/O to VDisks. At random intervals, the SVC may send new challenges to the peer to re-check the authentication. Details of how CHAP works can be found at:
http://en.wikipedia.org/wiki/Challenge-handshake_authentication_protocol

Each SVC host object can be assigned a CHAP secret. The host must then use CHAP authentication in order to begin a communications session with a node in the cluster. The cluster can also be assigned a CHAP secret if two-way authentication is required. When creating an iSCSI host within an SVC cluster, you provide the initiator's IQN; an example for a Windows server looks like this:
iqn.1991-05.com.microsoft:ITSO_W2008
In addition, an optional CHAP secret can be specified. Adding VDisks to a host, that is, LUN masking, is done in the same way as for hosts connected to the SVC via Fibre Channel.

Because iSCSI can be used in networks where data can be accessed illegally, the specification allows for different security methods. This can be done, for example, via a method such as IPSec, which, because it is implemented at the IP level, is transparent to higher levels such as iSCSI. Details on securing iSCSI can be found in RFC 3723, Securing Block Storage Protocols over IP, which is available at:
http://tools.ietf.org/html/rfc3723
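A hedged CLI sketch for defining an iSCSI-attached host, assigning it a CHAP secret, and mapping a VDisk to it follows. The host name, IQN, secret, and VDisk name are hypothetical, and the exact parameters (for example, where the CHAP secret is set) should be verified against Chapter 7, SVC operations using the CLI on page 337.

# Create a host object from the server's iSCSI initiator name
svctask mkhost -name W2K8_ITSO -iscsiname iqn.1991-05.com.microsoft:ITSO_W2008

# Assign a CHAP secret to the host object (assumed chhost parameter)
svctask chhost -chapsecret my_chap_secret W2K8_ITSO

# Map a VDisk to the iSCSI host, exactly as for an FC-attached host
svctask mkvdiskhostmap -host W2K8_ITSO VDISK_ISCSI_01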

2.2.14 iSCSI Multipathing


A multipathing driver allows the host to send commands to the same VDisk over multiple paths to the SVC. There is a fundamental difference between FC and iSCSI environments concerning multipathing. If Fibre Channel attached hosts see their Fibre Channel target and VDisks go offline, for example, due to a problem in the target node, its ports, or the network, the host has to use a different SAN path to continue I/O. A multipathing driver is therefore always required in the host.

iSCSI attached hosts see a pause in I/O when a (target) node is reset but, and this is the key difference, the host gets reconnected to the same IP target, which reappears after a short period of time, and its VDisks continue to be available for I/O.

Note: Be aware that, with the iSCSI implementation in SVC, an IP address failover/failback between the partner nodes of an I/O Group only takes place in the case of a planned or unplanned node restart. In the case of a problem on the network link (switches, ports, links), no such failover takes place.

A host multipathing driver for iSCSI is required if you want:
- To protect a server from network link failures
- To protect a server from network failures, if the server is connected via two HBAs to two separate networks
- To protect a server from a server HBA failure (if two HBAs are in use)
- To provide load balancing across the server's HBAs and the network links

2.2.15 Advanced Copy Services overview


The IBM System Storage SAN Volume Controller supports the following copy services:
- Synchronous Remote Copy
- Asynchronous Remote Copy
- FlashCopy with a full target
- Block virtualization and Data migration


Copy services are implemented between VDisks within a single SVC cluster or between multiple SVC clusters. They are therefore independent of the functionality of the underlying disk subsystems used to provide storage resources to an SVC cluster.

Synchronous/Asynchronous Remote Copy


The general application of remote copy seeks to maintain two copies of a data set. Often the two copies will be separated by some distance, hence the term 'remote', but this is not required. The remote copy can be maintained in one of two modes: synchronous or asynchronous. The definition of an asynchronous remote copy needs to be supplemented with some measure describing the maximum degree of asynchronicity. With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for the synchronous remote copy and asynchronous remote copy functions respectively.

Synchronous remote copy ensures that updates are committed at both the primary and the secondary before the application is given completion for an update. This ensures that the secondary is fully up-to-date should it be needed in a failover. However, it also means that the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. Where this is truly 'remote', this extra latency can have a significant adverse effect on application performance.

SVC assumes that the FC fabric to which it is attached contains hardware that achieves the long distance requirement for the application. This hardware makes storage at a distance accessible as if it were local storage. Specifically, it enables a group of up to four SVC clusters to connect (FC login) to each other and establish communications in the same way as if they were located nearby on the same fabric. The only difference is in the expected latency of that communication, the bandwidth capability of the links, and the availability of the links as compared with the local fabric. Special configuration guidelines exist for SAN fabrics used for data replication; issues to keep an eye on are the distance and the bandwidth of the site interconnections.

In asynchronous remote copy, the application is given completion for an update before that update has necessarily been committed at the secondary. Hence, on a failover, some updates might be missing at the secondary. The application must have some external mechanism for recovering the missing updates and re-applying them; this mechanism may involve user intervention. Asynchronous remote copy provides functionality comparable to a continuous backup process that is missing the last few updates. Recovery on the secondary site involves bringing up the application on this recent 'backup' and then re-applying the most recent updates to bring the secondary up to date.

The asynchronous remote copy must present at the secondary a view to the application that might not contain the latest updates, but is always consistent. If consistency has to be guaranteed at the secondary, applying updates in an arbitrary order is not an option. The reason is that, at the primary side, the application is enforcing an ordering implicitly by not scheduling an I/O until a previous dependent I/O has completed. We cannot hope to know the actual ordering constraints of the application. The best that can be achieved is to choose an ordering that the application might see if I/O at the primary were stopped at a suitable point. One such example is to apply I/Os at the secondary in the order in which they were completed at the primary. Thus, the secondary always reflects a state that could have been seen at the primary if we froze I/O there.

The SVC Global Mirror protocol operates to identify small groups of I/Os that are known to be active concurrently in the primary cluster.
The process to identify these groups of I/Os does not significantly contribute to the latency of these I/Os when they execute at the primary.

These groups are applied at the secondary in the order in which they were executed at the primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the protocol maintains good throughput as the system size grows.

The relationship between the two copies is not symmetric. One copy of the data set is considered the primary copy (sometimes also known as the 'source'). This copy provides the reference for normal run-time operation. Updates to this copy are shadowed to a secondary copy (sometimes known as the 'destination' or target). The secondary copy is not normally referenced for performing I/O. If the primary copy fails, the secondary copy can be enabled for I/O operation.

A typical use of this function involves two sites, where the first provides service during normal running and the second is only activated when a failure of the first site is detected. The secondary copy is not accessible for application I/O other than the I/Os that are performed for the remote copy process itself. SVC allows read-only access to the secondary storage when it contains a consistent image. This is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay if required. For instance, many operating systems need to read LBA 0 to configure a logical unit.

'Enabling' the secondary copy for active operation requires some SVC, operating system, and possibly application-specific work, which needs to be performed as part of the entire failover process. The SVC software at the secondary must be instructed to stop the relationship, which has the effect of making the secondary logical unit accessible for normal I/O access. The operating system might need to mount file systems or perform similar work, which can typically only happen when the logical unit is accessible for writes. The application might have some log of work to recover. Note that this property of Remote Copy, the requirement to enable the secondary copy, differentiates it from RAID 1 mirroring. The latter aims to emulate a single, reliable disk regardless of which system is accessing it. Remote Copy retains the property that there are two volumes in existence, but suppresses one while the copy is being maintained.

The underlying storage at the primary or secondary of a remote copy will normally be RAID storage, but may be any storage that can be managed by the SVC. Making use of a secondary copy involves a conscious policy decision by a user that a failover is required. The application work involved in establishing operation on the secondary copy is substantial. The goal is to make this rapid, but not seamless; rapid means very much faster than recovering from a backup copy. Most customers will aim to automate this through failover management software. SVC provides SNMP traps and interfaces to enable this automation. IBM's support for automation is provided by IBM Tivoli Storage Productivity Center for Replication. More information can be found at:
http://www-03.ibm.com/systems/storage/software/center/replication/index.html
or access the documentation online at the IBM Tivoli Storage Productivity Center information center at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
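At the CLI level, a remote copy relationship is created between a master VDisk on the local cluster and an auxiliary VDisk on a partner cluster. The following hedged sketch shows the general pattern; the cluster, relationship, and VDisk names, as well as the bandwidth value, are hypothetical, and the exact syntax should be verified against Chapter 7, SVC operations using the CLI on page 337.

# Establish a partnership with the remote cluster (bandwidth in MBps)
svctask mkpartnership -bandwidth 100 ITSO_CLS2

# Create a synchronous (Metro Mirror) relationship; add -global for Global Mirror
svctask mkrcrelationship -master VDISK_DB_01 -aux VDISK_DB_01_DR -cluster ITSO_CLS2 -name REL_DB_01

# Start the initial synchronization
svctask startrcrelationship REL_DB_01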

2.2.16 FlashCopy
FlashCopy makes a copy of a Source VDisk to a Target VDisk. The original contents of the Target VDisk are lost. After the copy operation has started, the Target VDisk has the contents of the Source VDisk as they existed at a single point in time. Although the copy operation takes a finite time, the resulting data at the target appears as if the copy was made instantaneously.

FlashCopy can be operated on multiple Source and Target Virtual Disks. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying Target Virtual Disks from their respective Source Virtual Disks. This allows a consistent copy of data that spans multiple Virtual Disks.

SVC also permits multiple Target VDisks to be FlashCopied from each Source VDisk. This capability will often be used to create images from different points in time for each Source VDisk, but it is also possible to create multiple images from a Source VDisk at a common point in time, as described previously. Source and Target VDisks may be Space-Efficient VDisks.

Starting with SVC 5.1, Reverse FlashCopy is supported. This enables Target VDisks to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.

FlashCopy is sometimes described as an instance of a Time-Zero copy (T0) or Point in Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that would be required to copy the data using conventional techniques.

Most customers will aim to integrate the FlashCopy feature for point in time copies and quick recovery of their applications and databases. IBM's support for this is provided by Tivoli Storage FlashCopy Manager. Information can be found at:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
A detailed description of copy services such as data mirroring and FlashCopy can be found in Chapter 7. Aspects of data migration are covered in Chapter 6.
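As a minimal, hedged sketch of these operations, the following sequence creates a FlashCopy mapping and triggers the point-in-time copy. The VDisk names DB_SOURCE and DB_TARGET and the mapping name FCMAP_DB are hypothetical; prestartfcmap and startfcmap are listed in Table 2-4, and the exact mkfcmap parameters should be verified against the CLI reference for your code level:

svctask mkfcmap -source DB_SOURCE -target DB_TARGET -name FCMAP_DB
(define the mapping between the source and target VDisk)
svctask prestartfcmap FCMAP_DB
(prepare the mapping and flush the cache for the source VDisk)
svctask startfcmap FCMAP_DB
(trigger the point-in-time copy)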

2.3 SVC cluster overview


In simple terms, a cluster is a collection of servers that, together, provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the cluster. This means that the client is isolated and protected from changes to the physical hardware, which brings a number of benefits.

Perhaps the most important of these benefits is high availability. Resources on clustered servers act as highly available versions of unclustered resources. If a node (an individual computer) in the cluster is unavailable, or too busy to respond to a request for a resource, the request is transparently passed to another node capable of processing it, so clients are unaware of the exact locations of the resources they are using. For example, a client can request the use of an application without being concerned about either where the application resides or which physical server is processing the request. The user simply gains access to the application in a timely and reliable manner.

Another benefit is scalability: if you need to add users or applications to your system and want performance to be maintained at existing levels, additional systems can be incorporated into the cluster.

The SVC is a collection of up to eight cluster nodes, added in pairs. In future releases, the cluster size may be increased to permit further performance scalability. These nodes are managed as a set (cluster) and present a single point of control to the administrator for configuration and service activity.

The eight-node limit within an SVC cluster is a limitation of the current product, not an architectural one. Larger clusters are possible without changing the underlying architecture. SVC demonstrated its ability to scale during the recently run Quicksilver project; more details can be found at:
http://www-03.ibm.com/press/us/en/pressrelease/24996.wss
Based on a fourteen-node cluster coupled with Solid State Drive controllers, a data rate of over one million I/O operations per second, with a response time of under one millisecond (ms), was achieved.

Although the SAN Volume Controller code is based on a purpose-optimized Linux kernel, the clustering feature is not based on Linux clustering code. The cluster software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes, makes the code portable, and provides the means to keep the single instances of the SVC code running on the different cluster nodes in sync. Node restarts (during a code upgrade), adding or removing nodes from a cluster, and node failures therefore do not impact the SVC's availability.

It is key for a cluster, that is, for all active nodes of a cluster, to know that they are members of the cluster. Especially in situations such as the split brain scenario, where single nodes lose contact with other nodes and cannot determine whether the other nodes can still be reached, it is essential to have a solid mechanism to decide which nodes form the active cluster. A worst case scenario would be a cluster that splits into two separate clusters.

Within an SVC cluster, the so-called Voting Set and an optional Quorum Disk are responsible for the integrity of the cluster. If nodes are added to a cluster, they are added to the voting set; if they are removed, they are also quickly removed from it. Over time, the voting set, and hence the nodes in the cluster, can completely change, such that the cluster ends up having migrated onto a completely different set of nodes from the set it started on.

Within an SVC cluster, the quorum is defined as:
1. More than half the nodes in the voting set, or
2. Exactly half the nodes in the voting set and the quorum disk from the voting set, or
3. When there is no quorum disk in the voting set, exactly half of the nodes in the voting set if that half includes the node that appears first in the voting set (nodes are entered into the voting set in the first available free slot).

These rules guarantee that there is only ever at most one group of nodes able to operate as the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic quorum. This means that following a loss of some nodes, if the cluster can continue operation, it will adjust the quorum requirement such that further node failures can be tolerated.

The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes and proceeds to determine (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum of two cluster IP addresses on one or both of its Ethernet ports in order to allow access for cluster management.

2.3.1 Quorum disks


The cluster uses the quorum disk for two purposes: as a tie breaker in the event of a SAN fault, when exactly half of the nodes that were previously a member of the cluster are present, and to hold a copy of important cluster configuration data. Just over 256 MB is reserved for this purpose on each quorum disk candidate.

There is only one active quorum disk in a cluster; however, the cluster uses three managed disks as quorum disk candidates. The cluster automatically selects the actual active quorum disk from the pool of assigned quorum disk candidates. If a tie-breaker condition occurs, the half of the cluster nodes that is able to reserve the quorum disk after the split has occurred locks the disk and continues to operate. The other half stops its operation. This prevents both sides from becoming inconsistent with each other.

When MDisks are added to the SVC cluster, the SVC checks each MDisk to see whether it can be used as a quorum disk. If the MDisk fulfils the requirements, the SVC assigns the first three MDisks added to the cluster as quorum candidates. One of them is selected as the active quorum disk.

Note: To be considered eligible as a quorum disk, an LUN must meet the following criteria:
- It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
- It cannot be allocated on one of the node-internal flash disks.
- It must have been manually allowed to be a quorum disk candidate, using the svctask chcontroller -allow_quorum yes command, where this is required for the disk subsystem.
- It must be in managed mode (no image mode disks).
- It must have sufficient free extents to hold the cluster state information, plus the stored configuration metadata.
- It must be visible to all nodes in the cluster.

If possible, the SVC places the quorum candidates on different disk subsystems. Once the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through different disk subsystems. With SVC 5.1, the quorum disk candidates and the active quorum disk in a cluster can be listed by using the svcinfo lsquorum command.

When the set of quorum disk candidates has been chosen, it is fixed. A new quorum disk candidate will only be chosen if:
- The administrator requests that a specific MDisk becomes a quorum disk using the svctask setquorum command.
- An MDisk that is a quorum disk is deleted from an MDG.
- An MDisk that is a quorum disk changes to image mode.

An offline MDisk will not be replaced as a quorum disk candidate. A cluster should be regarded as a single entity for disaster recovery purposes. This means that the cluster and the quorum disk should be co-located.
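As a short, hedged illustration of these commands, the following sketch lists the quorum disk candidates and then requests that a specific MDisk becomes a quorum disk candidate. The MDisk ID 5 and the quorum index 2 are hypothetical values; verify the exact setquorum syntax against the CLI reference for your code level:

svcinfo lsquorum
(list the three quorum disk candidates and show which one is active)
svctask setquorum -quorum 2 5
(request that MDisk 5 becomes the quorum disk candidate with index 2)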

Special considerations concerning the placement of the active quorum disk have to be taken into account for stretched cluster (that is, stretched I/O Group) configurations. Details are available at:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Note: Running an SVC cluster without a quorum disk can seriously impact your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored VDisks may be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored VDisks is recorded on the quorum disk.

During normal operation of the cluster, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. Should a node fail for any reason, the workload intended for it is taken over by another node until the failed node has been restarted and re-admitted to the cluster (which happens automatically). In the event that the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is re-admitted to the cluster (again, all automatically).

2.3.2 I/O Groups


For I/O purposes, SAN Volume Controller nodes within the cluster are grouped into pairs, called I/O groups, with a single pair being responsible for serving I/O on a given VDisk. One node within the I/O group represents the preferred path for I/O to a given VDisk; the other node provides the failover path. This preference alternates between nodes as each VDisk is created within an I/O group. This is a basic approach to balance the workload evenly between the two nodes.

Note: The preferred node by no means signifies absolute ownership. The data can still be accessed by the partner node in the I/O group in the event of a failure.
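As a hedged sketch of how the preferred node can be influenced when a VDisk is created, the following command creates a VDisk in a specific I/O group and explicitly sets its preferred node. The MDisk group MDG_DS8K, the I/O group io_grp0, the node name node1, and the VDisk name VD_APP01 are hypothetical, and the -node parameter should be checked against the mkvdisk documentation for your code level:

svctask mkvdisk -mdiskgrp MDG_DS8K -iogrp io_grp0 -node node1 -size 100 -unit gb -name VD_APP01
(create a 100 GB VDisk that is served by io_grp0 with node1 as its preferred node)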

2.3.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in anything from 1 ms to 10 ms of response time (for an enterprise class disk).

The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB of memory per node, that is, 48 GB per I/O Group or 192 GB per eight-node SVC cluster. SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of a node's memory. Depending on the current I/O situation on a node, the free part of the memory (a maximum of 24 GB) may be fully used as read cache.

Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage, for example, if 4 KB is written into a track and another 4 KB is written to another location in the same track. Because of this, the blocks written from the SVC to the disk subsystem can have any size between 512 bytes and 32 KB.

When data is written by the host, the preferred node within the I/O group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the partner node, that is, copied into the cache of its partner node, for availability reasons. Write data held in cache has not yet been destaged to disk; therefore, if only one copy of the data were kept, that data would be at risk of being lost. Write cache entries that have not been updated during the last two minutes are automatically destaged to disk.

If one node of an I/O group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O complete status back to the host. While running in this mode, the performance of the specific I/O Group can be degraded.

Starting with SVC Version 4.2.1, (write-) cache partitioning was introduced to the SVC. This feature restricts the maximum amount of write cache a single MDG can allocate in a cluster. Table 2-2 shows the upper limit of write cache data that any one MDG in a cluster can occupy.
Table 2-2 Upper limit of write cache per MDG

Number of MDGs in the cluster    Upper limit of write cache per MDG
One MDG                          100%
Two MDGs                         66%
Three MDGs                       40%
Four MDGs                        33%
More than four MDGs              25%

For in-depth information about SVC cache partitioning, we strongly recommend the following IBM Redpaper publication: IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

An SVC node is able to treat some or all of its physical memory as being non-volatile. Non-volatile means that its contents are preserved across power losses and resets. Besides the bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table and the write cache are the most important items in the non-volatile memory. The actual amount of memory that can be treated as non-volatile is dependent on the hardware.

In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The UPS units that are delivered with each node ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SAN Volume Controller node shuts down.

2.3.4 Cluster management


The IBM System Storage SAN Volume Controller can be managed by one of the following three interfaces:
- A textual Command Line Interface (CLI) accessed via a Secure Shell (SSH) connection (a short, hedged access sketch follows this list).
- A Web browser based graphical user interface (GUI), written as a CIM Client (ICAT) using the SVC CIMOM. It supports flexible and rapid access to storage management information.
- A CIMOM that can be used to write alternative CIM Clients (such as IBM System Storage Productivity Center).
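As a minimal, hedged sketch of CLI access, the following sequence opens an SSH session to the cluster and queries basic cluster information. The user name admin, the cluster IP address, and the cluster name ITSO_CL1 are hypothetical placeholders:

ssh admin@9.43.86.117
(open an SSH session to the cluster configuration IP address)
svcinfo lscluster -delim :
(list the clusters known to this cluster in colon-delimited form)
svcinfo lscluster ITSO_CL1
(show the detailed view of the local cluster)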

Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM agent that is embedded in the SAN Volume Controller cluster. With release 5.1 of the code, using the embedded CIMOM is mandatory. This CIMOM supports the SMI-S Version 1.3 standard.

User account migration


During the upgrade from SAN Volume Controller Console version 4.3.1 to version 5.1, the installation program attempts to migrate the user accounts currently defined to the CIMOM on the cluster. If the migration of those accounts with the installation program fails, you can manually migrate the user accounts with the help of a script. Details can be found in the SVC Software Installation and Configuration Guide, SC23-6628-04.

Hardware Management Console


The management console for SVC is referred to as the IBM System Storage Productivity Center (SSPC). IBM System Storage Productivity Center is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.

SSPC
SSPC is based on server hardware (IBM x-series based) and a set of preinstalled and optional software modules. Some of these preinstalled modules provide base functionality only, or are not activated. These modules, or their enhanced functionalities, can be activated by adding separate licences. An overview of the functions:

- Tivoli Integrated Portal
  IBM Tivoli Integrated Portal is a standards-based architecture for Web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.
- Tivoli Storage Productivity Center
  IBM Tivoli Storage Productivity Center Basic Edition 4.1.0 is preinstalled on the SSPC server. There are several different commercially available packages of Tivoli Storage Productivity Center that provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition. These packages can be activated by adding the specific licences to the preinstalled Basic Edition:
  - Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
  - Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
  - Tivoli Storage Productivity Center Standard Edition is a bundle that includes all other packages, along with SAN planning tools that make use of information collected from the Tivoli Storage Productivity Center components.
- Tivoli Storage Productivity Center for Replication
  The basic functions of Tivoli Storage Productivity Center for Replication provide management of IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the IBM ESS Model 800, IBM DS6000, DS8000, and IBM SAN Volume Controller. This package can be activated by adding the specific licences.
- SVC GUI (ICAT)
- SSH Client (PuTTY)
- Windows 2008 Enterprise Edition
- Several base software packages required for TPC

Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, may be installed on the SSPC server by the customer. An overview of the SVC management components is given in Figure 2-13. Details are described in Chapter 4, SVC initial configuration on page 101. Details about the SSPC can be found in IBM System Storage Productivity Center Users Guide Version 1 Release 4, SC27-2336-03.

Figure 2-13 SAN Volume Controller Management Overview

2.3.5 User Authentication


With SVC 5.1, several changes concerning user authentication for an SVC cluster have been introduced to make authentication simpler. Earlier SVC releases authenticated all users locally. SVC 5.1 has two authentication methods:
- Local authentication. Local authentication is similar to the existing method and is described below.
- Remote authentication. Remote authentication supports the use of a remote authentication server to validate the passwords. For SVC, this is the Tivoli Embedded Security Services (ESS). The ESS is part of the Tivoli Integrated Portal (TIP), which is one of the three components that come with TPC 4.1 (TPC, TPC-R, TIP) that are preinstalled on the SSPC 1.4, that is, the management console for SVC 5.1 clusters.

Each SVC cluster can have multiple users defined. The cluster maintains an audit log of successfully executed commands, indicating which users performed what actions at what times.

User names may contain only printable ASCII characters. The forbidden characters are: ' (single quote), : (colon), % (percent sign), * (asterisk), , (comma), and " (double quote). A user name cannot begin or end with a blank. Passwords for local users do not have any forbidden characters, but a password cannot begin or end with a blank.

SVC superuser
There is a special local user called the superuser that always exists on every cluster; it cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node's front panel. This function can be disabled, although doing so will leave the cluster inaccessible should all users forget their passwords or lose their SSH keys. The superuser's password supersedes the cluster administrator password present in previous software releases.

To register an SSH key for the superuser in order to provide command line access, the GUI has to be used. This is usually done at the end of the cluster initialization process, but the key can also be added later. The superuser is always a member of user group 0, which has the most privileged role within the SVC.

2.3.6 SVC roles and user groups


Each user group is associated with a single role. The role for a user group cannot be changed, but additional new user groups (with one of the defined roles) can be created. User groups are used for local and remote authentication. SVC defines a set of roles, and a matching set of default user groups (see Table 2-3) is predefined in an SVC cluster.
Table 2-3 SVC default user groups

User Group ID    User Group       Role
0                SecurityAdmin    SecurityAdmin
1                Administrator    Administrator
2                CopyOperator     CopyOperator
3                Service          Service
4                Monitor          Monitor

The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can do (or cannot do) on an SVC cluster. Table 2-4 on page 42 shows the roles, starting with the least privileged Monitor role at the top and ending with the most privileged SecurityAdmin role.

Table 2-4 Overview of SVC roles

Role: Monitor
Allowed commands: All svcinfo commands. svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser. svcconfig: backup

Role: Service
Allowed commands: All commands allowed for the Monitor role plus svctask: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime

Role: CopyOperator
Allowed commands: All commands allowed for the Monitor role plus svctask: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership

Role: Operator
Allowed commands: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset

Role: Administrator
Allowed commands: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset

Role: SecurityAdmin
Allowed commands: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset

2.3.7 SVC local authentication


Local users are those managed entirely on the cluster, without the intervention of a remote authentication service. Local users must have either a password, an SSH public key, or both. The password is used for GUI authentication, and the SSH key is used for command line or file transfer (SecureCopy) access. This means that a local user can access the SVC cluster via the GUI only if a password is specified.

Note: Be aware that local users are created per SVC cluster. Each user has a name, which must be unique across all users in one cluster. If you want to allow access for a user on different clusters, you have to define the user in each cluster with the same name and the same privileges.

A local user always belongs to exactly one user group. Figure 2-14 on page 43 shows an overview of local authentication within the SVC.
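The following hedged sketch shows how a local user might be created from the CLI with the mkuser command listed in Table 2-4. The user name peter, the password, and the user group Administrator are hypothetical examples, and the exact mkuser parameters should be checked against the CLI reference for your code level:

svctask mkuser -name peter -usergrp Administrator -password Passw0rd
(create a local user in the Administrator user group with a password for GUI access)
svcinfo lsuser
(list the users defined on the cluster and verify the new entry)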

Figure 2-14 Simplified overview of SVC local authentication

2.3.8 SVC remote authentication and Single Sign On


An SVC cluster can be configured to use a remote authentication service. Remote users are those that are managed by the remote authentication service. Remote users only have to be defined in the SVC if command line or file-transfer access is required; in that case, the remote authentication flag has to be set, and an SSH key and a password have to be defined for this user. Keep in mind that for users requiring CLI access with remote authentication, defining the password locally for this user is mandatory. Remote users do not belong to any SVC user group, because the remote authentication service, for example, an LDAP directory server such as IBM Tivoli Directory Server or Microsoft Active Directory, delivers the user group information.

The upgrade from SVC 4.3.1 is seamless. Existing users and roles are migrated without interruption. Remote authentication can be enabled once the upgrade is complete. Figure 2-15 on page 44 gives an overview of SVC remote authentication.

Figure 2-15 Simplified overview of SVC 5.1 remote authentication

The authentication service supported by SVC is the Tivoli Embedded Security Services (ESS) server component, level 6.2. The ESS server provides the following two key features:
1. ESS isolates the SVC from the actual directory protocol in use. This means the SVC communicates only with ESS to get its authentication information. The type of protocol used to access the central directory, and the kind of directory system used, are transparent to SVC.
2. ESS provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users should not have to log in multiple times when using what appears to them to be a single system. It is used within TPC; in other words, when the SVC Console is launched from within TPC, the user does not have to log on to the SVC Console because they have already logged in to TPC.

With reference to Figure 2-16 on page 45, the user starts application A with a user name and password (1), which is authenticated using the ESS server (2). The server returns a token (3), which is an opaque string that can only be interpreted by the ESS server. The server also supplies the user's groups and an expiry timestamp for the token. The client device (SVC in our case) is responsible for mapping an ESS user group to roles. Application A needs to launch application B. Instead of requiring the user to enter a new password to authenticate to application B, A passes B the ESS token (4). Application B passes the ESS token to the ESS server (5), which decodes the token and returns the user's ID and groups to application B (6), along with an expiry timestamp.


Figure 2-16 SSO with ESS

The token expiry timestamp, which is returned by the server, is advice to the ESS client applications A and B about credential caching. The applications are permitted to cache and use a token or username-password combination until the timestamp expires. So, in our example, application B could cache the fact that a particular token maps to a particular user ID and groups. This is a performance boost, as it saves the latency of querying the ESS server on each interaction between A and B. Once the lifetime of the token has expired, application A must query the server again and obtain a new timestamp in order to rejuvenate the token (or, alternatively, discover that the credentials are now invalid). The ESS server administrator can configure the length of time used to set expiry timestamps. This system is only effective if the ESS server and the applications have synchronized clocks.

Using a remote authentication service


The steps to use SVC with a remote authentication service are:
1. Configure the cluster with the location of the remote authentication server. Settings can be changed with svctask chauthservice ... and viewed with svcinfo lscluster ... SVC supports either an HTTP or an HTTPS connection to the ESS server. If the HTTP option is used, user and password information is transmitted in clear text over the IP network.
2. Configure user groups on the cluster matching those used by the authentication service. For each group of interest known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled. For example, there could be a group called sysadmins, whose members require the SVC Administrator role. This should be configured using the command:
svctask mkusergrp -name sysadmins -remote -role Administrator

If none of a user's groups matches any of the SVC user groups, the user is not permitted to access the cluster.
3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and that do not require SSH access should be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and that do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step in order to determine the user's role. The need to configure the user's password on the cluster in addition to the authentication service is due to a limitation in the ESS server software.
5. Configure the system time. For correct operation, both the SVC cluster and the system running the ESS server must have the exact same view of the current time. The easiest way to achieve this is for both to use the same NTP server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.

A short, hedged CLI sketch of the user group configuration is given at the end of this section.

TPC 4.1 also leverages the TIP infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable Single Sign On (SSO). More information on how SSO is implemented within TPC 4.1 can be found in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, at:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
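As a short, hedged illustration of the group configuration above, the following sketch creates a remote-enabled user group and verifies the result. The group name storageops is hypothetical; the mkusergrp parameters follow the example given in step 2, and the lsusergrp listing is an assumption that should be checked against the CLI reference:

svctask mkusergrp -name storageops -remote -role CopyOperator
(create a user group whose membership is resolved by the remote authentication service)
svcinfo lsusergrp
(list all user groups and confirm that the remote setting is enabled for storageops)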

2.4 SVC hardware overview


The SVC 5.1 release also provides new, more powerful hardware nodes. These new nodes are, as defined in the underlying COMPASS architecture, based on Intel processors with standard PCI-X adapters to interface to the SAN and LAN. The key hardware features of the new SVC 2145-CF8 Storage Engine are:
- A new SVC engine based on an Intel Core i7 2.4 GHz quad-core processor
- 24 GB of memory, with future growth possibilities
- Four 8 Gbps FC ports
- Up to four Solid State Drives, enabling scale-out high performance SSD support with SVC
- Two power supplies
- Double the bandwidth of its predecessor node (2145-8G4)
- Up to double the IOPS of its predecessor node (2145-8G4)
- A 19-inch rack-mounted enclosure
- IBM Systems Director Active Energy Manager enabled

The new nodes can be smoothly integrated within existing SVC clusters. New nodes may be intermixed in pairs within existing SVC clusters. Mixing engine types in a cluster results in VDisk throughput characteristics of the engine type in that I/O group. The cluster non-disruptive upgrade capability may be used to replace older engines with new 2145-CF8 engines. The new nodes are 1U high, fit into 19-inch racks, and use the same UPS models as previous models.

Integration into existing clusters requires that the cluster runs SVC 5.1 code. The only node that does not support SVC 5.1 code is the 2145-4F2 node. An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes will be available later this year. Figure 2-17 shows the front-side view of the new SVC 2145-CF8 node:

Figure 2-17 The SVC 2145-CF8 Storage Engine

Bear in mind that some of the new features in the SVC 5.1 release, such as iSCSI, are software features and are therefore available on all nodes that support this release.

2.4.1 Fibre Channel interfaces


The IBM SAN Volume Controller provides the following FC interfaces on the different node types:
- Supported link speed of 2/4/8 Gbps on SVC 2145-CF8 nodes
- Supported link speed of 1/2/4 Gbps on SVC 2145-8G4, SVC 2145-8A4, and SVC 2145-8F4 nodes

The nodes come with a 4-port HBA. The FC ports on these node types autonegotiate the link speed that is used with the FC switch. The ports normally operate at the maximum speed that is supported by both the SVC port and the switch. However, if a large number of link errors occur, the ports might operate at a lower speed than what is supported. The actual port speed for each of the four ports can be displayed via the GUI, the CLI, the node's front panel, and also by LEDs placed at the rear of the node.

For details, consult the node-specific SVC Hardware Installation Guides:
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
- IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
- IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
- IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221

The SVC imposes no limit on the FC optical distance between SAN Volume Controller nodes and host servers. Fibre Channel standards, along with SFP capabilities and cable type, dictate the maximum FC distances supported. If you use longwave SFPs in the SVC node itself, the longest supported FC link between the SVC and the switch is 10 km. The actual cable lengths supported with shortwave SFPs are shown in Table 2-5.
Table 2-5 Overview of supported cable lengths

FC link speed           OM1 (M6) standard 62.5/125 µm    OM2 (M5) standard 50/125 µm    OM3 (M5E) optimized 50/125 µm
2 Gbps FC               150 m                            300 m                          500 m
4 Gbps FC               70 m                             150 m                          380 m
8 Gbps FC (limiting)    21 m                             50 m                           150 m

With respect to the number of ISL hops allowed in a SAN fabric between SVC nodes or clusters, the rules defined in Table 2-6 apply.
Table 2-6 Number of supported ISL hops

Between nodes in an I/O Group:           0 (connect to the same switch)
Between nodes in different I/O Groups:   1 (recommended: 0, connect to the same switch)
Between nodes and disk subsystem:        1 (recommended: 0, connect to the same switch)
Between nodes and host server:           maximum 3

2.4.2 LAN Interfaces


The 2145-CF8 node supports, as its predecessor nodes did, two 1 Gbps LAN ports. In SVC 4.3.1 and before, the SVC cluster presented a single IP interface. This interface was used by the SVC configuration interfaces (CLI, CIMOM). Although multiple physical nodes were present in the SVC cluster, only a single node (the configuration node) was active on the IP network. This configuration IP address was presented from the eth0 port of the configuration node. If the configuration node failed, a different node in the cluster took over the duties of the configuration node, and the IP address for the cluster was then presented at the eth0 port of that new configuration node. The configuration node supported concurrent access on IPv4 and IPv6 configuration addresses on the eth0 port from SVC 4.3 onwards.

Starting with SVC 5.1, the cluster configuration node may be accessed on either eth0 or eth1. This means the cluster may have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The cluster may therefore be managed by SSH clients or GUIs on SSPCs on separate physical IP networks; this provides redundancy in the event of a failure of one of these IP networks. Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the cluster configuration IP addresses. An overview is shown in Figure 2-12 on page 29.
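As a hedged sketch of configuring the additional configuration IP address introduced with SVC 5.1, the following command assigns an IPv4 address to the second Ethernet port of the cluster. The chclusterip command name, its parameters, and the address values are assumptions that should be verified against the CLI reference for your code level:

svctask chclusterip -port 2 -clusterip 10.1.2.50 -gw 10.1.2.1 -mask 255.255.255.0
(present an additional cluster management IP address on the eth1 port; the addresses shown are hypothetical)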


2.5 Solid State Drives


Solid State Drives (SSDs), or to be more specific, single-level cell (SLC) or multi-level cell (MLC) NAND flash based disks (for the sake of simplicity referred to as SSDs in the following chapters), are one of the steps used to overcome a problem that has grown continuously over the last few decades: the memory/storage bottleneck.

2.5.1 Storage bottleneck problem


The memory/storage bottleneck describes the steadily growing gap between the time required for a CPU to access data located in its cache/memory (typically nanoseconds) and data located on external storage (typically milliseconds). While CPUs and cache/memory devices continually improve their performance, for reasons described later this is not true in general for the mechanical disks that are used as external storage today. Figure 2-18 on page 49 shows these access time differences.

Figure 2-18 The Memory/Storage bottleneck

The individual times shown are not that important, but we draw your attention to the time differences between accessing data located in cache and data located on external disk. As an aid, we have added a second scale, which gives you an impression of how long it would take to access the data in the hypothetical case that a single CPU cycle took one second. This should help you get an even better feeling for how important it is for future storage technologies to close or reduce the gap between access times for data stored in cache/memory and access times for data stored on an external medium.

Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress regarding capacity growth, form factor and size reduction, price decrease ($/GB), and reliability. On the other hand, the number of I/Os that a disk can handle and the response time it takes to process a single I/O have not improved at the same rate; although they have certainly increased, it has not been at such a quick rate. In real environments, we can expect from today's enterprise class FC or SAS disks up to 200 IOPS per disk, with an average response time (latency) of approximately 7 ms per I/O.

To simplify, one can state today that rotating disks are getting, and will continue to get, bigger in capacity (several TBs), smaller in form factor and footprint (3.5, 2.5, 1.8 inches), and cheaper ($/GB), but not necessarily faster. The limiting factor is the number of revolutions per minute (rpm) that a disk can perform (currently 15,000). This factor largely defines the time that is required to access a specific data block on such a rotating device. There might be small improvements in the future, but a big step such as doubling the number of revolutions would, if technically possible at all, inevitably be accompanied by a massive increase in power consumption and a price increase for such a device.

2.5.2 SSD solution


SSDs can provide a solution for this dilemma. Having no rotating parts means improved robustness and lower power consumption; together with a remarkable improvement in I/O performance and a massive reduction in average I/O response time (latency), these are the compelling reasons to use SSDs in today's storage subsystems. Enterprise class SSDs typically deliver 50,000 read IOPS and 20,000 write IOPS, with latencies of typically 50 µs for reads and 800 µs for writes. Their form factors (2.5 inch and 3.5 inch) and their interfaces (FC/SAS/SATA) make them easy to integrate into existing disk shelves.

Note: Specific performance problems might be solved by carefully adding some SSDs to an existing disk subsystem. But be aware that solving performance problems by using SSDs excessively in existing disk subsystems will inevitably create performance bottlenecks on the underlying RAID controllers.

2.5.3 SSD market


The SSD storage market is a rapidly evolving one. The key differentiator between today's SSD products available on the market is not the storage medium itself, but the logic in the disk-internal controllers. The top priorities in today's controller development are the optimal handling of what is referred to as wear leveling, which largely defines a controller's capability to ensure a device's durability, and closing the remarkable gap between read and write I/O performance.

Today's SSD technology is just a first step into the world of high performance persistent semiconductor storage. A group of the approximately ten most promising future technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory


SCM promises a further massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared to today's SSD technology. IBM Research is actively engaged in these new technologies.

Details of nanoscale devices can be found here:
http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/
Details of Storage Class Memory can be found here:
http://tinyurl.com/plk7as

A comprehensive overview of SSD technology, which in our opinion is well worth reading, can be found in a subset of the well-known Spring 2009 SNIA Technical Tutorials. They are available on the SNIA Web site at:
http://www.snia.org/education/tutorials/2009/spring/solid

When will all this happen? As always, it depends, or to say it with Niels Bohr: "Prediction is very difficult, especially if it is about the future." What we can say at this time is that there is evidence that all of this will become reality in the second half of this decade and, as you can imagine, it will change the architecture of today's storage infrastructures fundamentally. How the first release of this new technology is integrated into the IBM SAN Volume Controller is described in the following topic.

2.6 SSD in the SVC


The SSDs provided in the new 2145-CF8 nodes offer a new ultra-high-performance storage option. They are available in the 2145-CF8 nodes only. SSDs can be pre-installed in the new nodes or installed on a per-disk basis at a later point in time, without interrupting service, as a field hardware upgrade. SSDs include the following features:
- Up to four SSDs can be installed in each SAN Volume Controller 2145-CF8 node.
- An IBM PCIe SAS host bus adapter (HBA) is required on each node that contains an SSD.
- Each SSD is a 2.5-inch Serial Attached SCSI (SAS) drive.
- Each SSD provides up to 140 GB of capacity.
- SSDs are hot-pluggable and hot-swappable.
- Up to four SSDs are supported per node, which provides up to 560 GB of usable SSD capacity per node.
- Always install the same amount of SSD capacity in both nodes of an I/O Group.
- In a cluster running 5.1 code, node pairs with SSDs can be mixed with older node pairs, either with or without local SSDs installed.

This scalable architecture enables customers to take advantage of the throughput capabilities of SSDs. The performance per I/O group (from SSDs only) is:
- IOPS: 200K reads, 80K writes, 56K with a 70/30 mix
- Throughput: 800 MB/s reads, 400 MB/s writes

SSDs are local drives in an SVC node and are presented as MDisks to the SVC cluster. They belong to an SVC internal controller. These controller objects have the WWNN of the node in question, but they are reported as standard controller objects that can be renamed by the user. SVC reserves eight of these controller objects for the internal SSD controllers. MDisks based on SSDs can be identified by showing their attributes via the GUI or CLI: for these MDisks, the attributes Node ID and Node Name are set. In all other MDisk views, these attributes are blank.
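As a brief, hedged illustration, the following commands list the MDisks and display the detailed attributes of a single MDisk so that the node-related fields can be checked. The MDisk name mdisk12 is a hypothetical example:

svcinfo lsmdisk
(list all MDisks that are visible to the cluster, including the SSD based ones)
svcinfo lsmdisk mdisk12
(show the detailed view of one MDisk; for an SSD based MDisk, the node ID and node name attributes are populated)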

2.6.1 SSD configuration rules


You must follow the SAN Volume Controller SSD configuration rules for nodes, I/O Groups, and clusters:

- Nodes that contain SSDs can coexist in a single SAN Volume Controller cluster with any other supported nodes.
- Do not combine nodes that contain SSDs and nodes that do not contain SSDs in a single I/O group. It is acceptable to temporarily mix node types in an I/O Group while upgrading SVC node hardware from an older model to the 2145-CF8.
- Nodes that contain SSDs in a single I/O group must have the same SSD capacities.
- Quorum functionality is not supported on SSDs within SAN Volume Controller nodes.

You must follow the SAN Volume Controller SSD configuration rules for MDisks and MDisk groups:
- Each SSD is recognized by the cluster as a single MDisk.
- For each node that contains SSDs, create a single MDisk group that includes only the SSDs that are installed in that node.

Terminology: An MDG using SSDs contained within an SVC node is referenced as SVC SSD storage throughout this book. The configuration rules given in this book apply to SVC SSD storage. Do not confuse this term with SSD storage contained in SAN attached storage controllers, such as the IBM DS8000 or DS5000, which is described elsewhere.

- When you add a new solid-state drive (SSD) to an MDisk group, that is, move it from unmanaged to managed mode, the SSD is automatically formatted and set to a block size of 512 bytes.

You must follow these configuration rules for VDisks using storage from SSDs within SAN Volume Controller nodes:
- VDisks using SVC SSD storage must be created in the I/O group where the SSDs physically reside.
- VDisks using SVC SSD storage must be mirrored to another MDG to provide fault tolerance. The supported mirroring configurations are:
  - For the highest performance, the two VDisk copies must be created in the two managed disk groups that correspond to the SVC SSD storage in the two nodes of the same I/O group. The recommended SSD configuration for highest performance is shown in Figure 2-19 on page 53.
  - For the best utilization of the SSD capacity, the primary VDisk copy must be placed on SVC SSD storage, and the secondary copy may be placed on Tier 1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance of the VDisk degrades to the performance of the non-SSD storage. All read I/Os are sent to the primary copy of a mirrored VDisk, so reads experience SSD performance. Write I/Os are mirrored to both locations, so write performance matches the speed of the slowest copy. The recommended SSD configuration for best SSD capacity utilization is shown in Figure 2-20 on page 54.
- To balance the read workload, evenly split the primary and secondary VDisk copies across the nodes that contain SSDs.
- The preferred node of the VDisk must be the same node that contains the SSDs used by the primary VDisk copy.

Important: For VDisks provisioned out of SVC SSD storage, VDisk Mirroring is mandatory in order to maintain access to the data stored on SSDs if one of the nodes in the I/O Group is being serviced or fails.

Bear in mind that VDisks based on SVC SSD storage should always be presented by the I/O Group and, during normal operation, by the node that the SSDs belong to. The rules above are designed to direct all host I/O to the node containing the relevant SSDs. Existing VDisks can be migrated online to SVC SSD storage. It might therefore be necessary to first move the VDisk into the right I/O Group, which requires quiescing I/O to this VDisk during the move. The recommended SSD configuration for highest performance is shown in Figure 2-19.

Figure 2-19 SSD configuration for highest performance

For a read-intensive application, mirrored VDisks can keep their secondary copy in a SAN-based Managed Disk Group. This could be, for example, an IBM DS8000 providing Tier 1 storage resources to an SVC cluster. Because all read I/Os are sent to the primary copy (which would be set to the SSD copy), this gives reasonable performance as long as the Tier 1 storage can sustain the write I/O rate. Performance decreases if the primary copy fails. Again, ensure that the node on which the primary VDisk copy resides is also the preferred node for the VDisk. The recommended SSD configuration for best capacity utilization is shown in Figure 2-20 on page 54. A hedged CLI sketch of creating such a mirrored VDisk follows.
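The following hedged sketch creates such a mirrored VDisk, with one copy in an SSD based MDisk group and the other copy in a DS8000 based MDisk group. The names MDG_SSD_N1, MDG_DS8K, io_grp0, node1, and VD_DB01 are hypothetical, and the mkvdisk parameters should be verified against the CLI reference for your code level:

svctask mkvdisk -iogrp io_grp0 -node node1 -mdiskgrp MDG_SSD_N1:MDG_DS8K -copies 2 -size 100 -unit gb -name VD_DB01
(create a 100 GB VDisk with one copy in each listed MDisk group, served by io_grp0 with node1 as the preferred node)
svcinfo lsvdiskcopy VD_DB01
(list both copies and check which one is the primary; if necessary, the primary copy can be changed, for example with svctask chvdisk -primary)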

Figure 2-20 Recommended SSD configuration for best SSD capacity utilization

Bear the following points in mind when you are using SVC SSD storage:
- I/O requests to SSDs that are in other nodes are automatically forwarded. However, this introduces additional delays. Try to avoid these configurations by following the configuration rules given previously.
- Take care before migrating image mode VDisks to SVC SSD storage or deleting a copy of a mirrored VDisk based on SVC SSD storage, because in all scenarios where your data is stored in only a single SSD based MDG, it is no longer protected against node or disk failures.
- If you delete from a cluster, or replace, nodes containing local SSDs, remember that the data stored on their SSDs may have to be decommissioned.
- If you shut down a node that contains SVC SSD storage with VDisks that are not mirrored to another node or storage system, you will lose access to any VDisks that are associated with that SVC SSD storage. A force option is provided to prevent an unintended loss of access.
- The SVC 5.1 code provides the functionality to upgrade the SSD firmware and FPGA code. Details and how-tos can be found in the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide, SC23-6628.

2.6.2 SVC 5.1 supported hardware list, device driver and firmware levels
With the SVC 5.1 release, as in every release, IBM offers functional enhancements and/or new hardware that can be integrated into existing or new SVC clusters, but also a number of interoperability enhancements or new support for servers, SAN switches, and disk subsystems. The most current information can be found at:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277

2.6.3 What was new with SVC 4.3.1


Before we introduce the new features of SVC 5.1, we recap the features that were added with release 4.3.1:

- New node type: 2145-8A4
  The Entry Edition hardware comes with the same functionality as the 2145-8G4 nodes, 8 GB of memory, and four 4 Gbps FC interfaces. It provides approximately 60% of the performance of the 2145-8G4 nodes. This is an ideal choice for entry level solutions with reduced performance requirements, but without any functional restrictions. It comes with physical disk based licensing.
- Embedded CIMOM
  The CIMOM and associated SVC CIM Agent is the software component that provides the industry standard CIM protocol as a management interface to SVC. Up to SVC 4.3.0, the CIMOM ran on what was initially known as the SVC Master Console, which was replaced in SVC 4.2.0 by the SSPC based management console. The SSPC is an integrated package of hardware and software that provides all the management software (SVC CIMOM, SVC GUI) required to manage the SVC, as well as components for managing other storage systems. Customers may continue to use either the Master Console or SSPC to manage SVC 4.3.1. In addition, the software components required to manage the SVC (SVC CIMOM and SVC GUI) are provided by IBM in software form, allowing customers who have a suitable hardware platform to build their own Master Console.

  Note: With SVC 5.1 the usage of the embedded CIMOM is mandatory. In an upgrade scenario, we therefore recommend to first switch the existing configurations from the Master Console/SSPC based CIMOM to the embedded CIMOM (remember to update the TPC configuration if it is in use), then to upgrade the Master Console/SSPC in a second step, and finally to upgrade the SVC cluster.

- Windows 2008 support for the SVC GUI and Master Console
- SSPC 1.3 support
- NTP synchronization
  The SVC cluster time operates in one of two exclusive modes: the default mode, in which the cluster uses the configuration node's system clock, and NTP mode, in which the cluster uses an NTP time server as its time source and adjusts the configuration node's system clock according to time values obtained from the NTP server. Once operating in NTP mode, the SVC cluster logs an error if an NTP server is unavailable.
- Performance enhancement for overlapped Global Mirror writes

2.6.4 What is new with SVC 5.1


Most of the new features coming with SVC release 5.1 have already been described in this chapter. The following list is added for those who want a quick overview:
- New hardware nodes (CF8)
  The new SVC engine is based on an IBM System x3550 M2 server with an Intel Core i7 2.4 GHz quad-core processor. It provides 24 GB of cache (with future growth possibilities) and four 8 Gbps FC ports. It provides support for Solid State Drives (up to four per SVC node), enabling scale-out high performance SSD support with SVC. The new nodes may be intermixed in pairs with other engines in SVC clusters. Details are described in 2.4, SVC hardware overview on page 46.
- 64-bit kernel
  For the model 8F2 and later, the SVC software kernel has been upgraded to take advantage of the 64-bit hardware on SVC nodes. The model 4F2 is not supported with SVC 5.1 software, but it remains supported with SVC 4.3.x software. The 2145-8A4 is an effective replacement for the 4F2, and it doubles the performance of the 4F2. Going to 64-bit mode improves the performance capability. It allows for the cache increase (24 GB) in the 2145-CF8 and will be used in future SVC releases for cache increases and other expansion options.
- Solid State Disk support
  Optional Solid State Drives (SSDs) in SVC engines provide a new ultra-high-performance storage option. Up to four SSDs per node (140 GB each, larger in the future) can be added to a node. This provides up to 540 GB of usable SSD capacity per I/O group, or more than 2 TB in an eight node SVC cluster. The SVC's scalable architecture enables customers to take advantage of the throughput capabilities of SSDs. SSDs are fully integrated into the SVC architecture. VDisks may be migrated to and from SSD VDisks without application disruption. FlashCopy may be used for backup or to copy data to SSD VDisks. Details are described in 2.5, Solid State Drives on page 49.
- iSCSI support
  Native attachment to SVC for host systems using the iSCSI protocol. This is a software feature; it is therefore supported also on older SVC nodes that support SVC 5.1. iSCSI is not used for storage attachment, for SVC cluster-to-cluster communication, or for communication between SVC engines in a cluster; these still use Fibre Channel. Details are described in 2.2.10, iSCSI Overview on page 26.
- Multiple relationships for synchronous data mirroring (Metro Mirror)
  Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist between a maximum of four SVC clusters. Keep in mind that a VDisk can be in only one MM/GM relationship. The creation of up to 8192 Metro Mirror and/or Global Mirror relationships is supported. The individual relationships are individually controllable (create/delete, start/stop). Details are described in Synchronous/Asynchronous Remote Copy on page 32.
- Enhancements to FlashCopy, support for Reverse FlashCopy
  This enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. Multiple targets, and thus multiple rollback points, are supported. Details are described in 2.2.16, FlashCopy on page 33.
- Zero Detection
  This provides the means to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) using VDisk Mirroring.
SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423ch02.SVC Overview Werner.fm

migrate from a fully-allocated to Space-Efficient VDisk add the target space-efficient copy, wait for synchronization to complete, then remove the source fully-allocated copy. Details are described in 2.2.7, Mirrored VDisk on page 21. User authentication changes SVC 5.1 will support remote authentication and single sign on by using an external service running on the SSPC. The service providing this service will be the Tivoli Embedded Security Services (ESS) installed on the SSPC. Current local authentication methods will still be supported. Details are described in 2.3.5, User Authentication on page 40. RAS Enhancements As a complement to the existing SVC email and SNMP trap facilities, SVC adds Syslog error event logging for those using Syslog already in their configurations. This enables optional transmission over syslog interface to a remote syslog daemon when parsing the Error Event Log. The format and content of messages sent to a Syslog server are identical to the ones transmitted in a SNMP trap message.

2.7 Maximum supported configurations


For a list of the maximum supported configurations, visit the SVC support site at:
http://www.ibm.com/storage/support/2145
Some of the limits have been lifted with SVC 5.1, but not all. The following list gives an overview of the most important ones. For details, always consult the SVC support site:

iSCSI support
All host iSCSI names are converted to an internally generated WWPN (one per iSCSI name per I/O group). It follows that each iSCSI name in an I/O group consumes one WWPN that would otherwise be available for a real Fibre Channel WWPN. The limits for ports per I/O group, per cluster, and per host object therefore remain the same, but these limits are now shared between Fibre Channel WWPNs and iSCSI names.

The number of cluster partnerships has been lifted from one to a maximum of three partnerships. This means that a single SVC cluster can have partnerships with up to three other clusters at the same time.

Remote Copy (RC)
The number of RC relationships has been lifted from 1024 to 8192. Keep in mind that a single VDisk can, at any point in time, be a member of exactly one RC relationship. The number of RC relationships per RC consistency group has also been lifted to 8192.

VDisk
A VDisk can contain a maximum of 2^17 (131,072) extents. With an extent size of 2 GB, this gives a maximum VDisk size of 256 TB.
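As a quick cross-check of the VDisk size limit (this is simply our arithmetic, not an additional product limit):

   131,072 extents x 2 GB per extent = 262,144 GB = 256 TB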

2.8 Useful SVC Links


The SVC Support Page can be found at:

http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1

SVC online documentation can be found at:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

IBM Redbooks publications about the SVC can be found at:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

2.9 Commonly encountered terms


Channel extender
A channel extender is a device for long distance communication connecting other SAN fabric components. Generally, these can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or some other long distance communication protocol.

Cluster
A group of 2145 nodes that present a single configuration and service interface to the user.

Consistency Group
A group of Virtual Disks that have copy relationships that need to be managed as a single entity.

Copied
Copied is a FlashCopy state that indicates that a copy has been triggered at some time after the copy relationship was created. The copy process is complete and the Target Disk has no further dependence on the Source Disk. The time of the last trigger event is normally displayed with this status.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages a cache of the configuration information that describes the cluster configuration and provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will assume the role.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all the connectivity of the redundant SAN, but without the 100% redundancy. An SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A counterpart SAN is often called a SAN fabric.

Error Code
A value used to identify an error condition to a user. This value might map to one or more Error IDs or to values presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.

Error ID
A value used to identify a unique error condition detected by the 2145 cluster. An Error ID is used internally in the cluster to identify the error.

Excluded
A status condition that describes a Managed Disk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the Managed Disk into the cluster managed storage.

Extent
A fixed size unit of data that is used to manage the mapping of data between Managed Disks and Virtual Disks.

Fibre Channel port logins


This is the number of hosts that can see any one SVC node port. Some disk subsystems, such as the IBM DS8000, recommend limiting the number of hosts that use each port, to prevent excessive queuing at that port. Clearly, if the port fails or the path to that port fails, the host might fail over to another port and the fan-in criteria might be exceeded in this degraded mode.

Front end and back end


The SAN Volume Controller takes managed disks and presents these to application servers (hosts). The managed disks are looked after by the back-end application of the SAN Volume Controller. The virtual disks presented to hosts are looked after by the front-end application in the SAN Volume Controller.

FRU
Field Replaceable Unit. The individual parts which are held as spares by the service organization.

Grain
A grain is the unit of data represented by a single bit in a FlashCopy bitmap (64 KB or 256 KB) in the SAN Volume Controller. It is also the unit by which the real size of a space-efficient VDisk is extended (32, 64, 128, or 256 KB).

HBA
Host Bus Adapter. In the context of the SAN Volume Controller, this is an interface card that connects a host bus, such as PCI, to the SAN.

Host ID
A numeric identifier assigned to a group of Host FC ports or iSCSI Host names for the purposes of LUN Mapping. For each Host ID there is a separate mapping of SCSI Ids to VDisks. It is intended that there be a one to one relationship between hosts and host IDs although this cannot be policed.

IQN (iSCSI Qualified Name)


iSCSI uses special names to refer to both iSCSI initiators and targets; IQN is one of the three name formats that iSCSI provides. The format is iqn.yyyy-mm.{reversed domain name}. For example, the default IQN for an SVC node is: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

iSNS (Internet Storage Name Service)


The Internet Storage Name Service (iSNS) protocol allows automated discovery, management and configuration of iSCSI and Fibre Channel devices. It has been defined in RFC 4171.

Image Mode
A configuration mode similar to Router mode but with the addition of cache/copy functions. SCSI commands are not forwarded directly to the Managed Disk.

I/O Group
A collection of VDisk and node relationships, that is, an SVC node pair that presents a common interface to host systems. Each SAN Volume Controller node is associated with exactly one I/O group. The two nodes in the I/O group provide access to the VDisks in the I/O group.

ISL hop
An interswitch link (ISL) is a connection between two switches, and is counted as an ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. It measures distance only in terms of ISLs in the fabric.

Local fabric
Since the SVC supports remote copy, there might be significant distances between the components in the local cluster and those in the remote cluster. The local fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.

Local and remote fabric interconnect


These are the SAN components that are used to connect the local and remote fabrics. They might simply be single mode optical fibers driven by high-power GBICs or SFPs, or more sophisticated components, such as channel extenders or special SFP modules used to extend the distance between SAN components.

LU and LUN
Logical Unit and Logical Unit Number, as formally defined by the SCSI standards. Used as an abbreviation for an entity that exhibits disk-like behavior, for example, a VDisk or an MDisk.

Managed disk (MDisk)


A SCSI Disk presented by a RAID controller and managed by the cluster. The Managed Disk is not visible to host systems on the SAN.

Managed disk group (MDiskgrp or MDG)


A collection of Managed Disks that jointly contain all the data for a specified set of Virtual Disks.

Managed Space Mode


This is a configuration mode similar to Image mode but with the addition of space management functions.

Master console (MC)


The master console is the platform on which the software used to manage the SAN Volume Controller runs. With Version 4.3, it is being replaced by the SSPC. However, V4.3 GUI Console code is supported on existing master consoles.

Node
A single processing unit, which provides virtualization, cache, and copy services for the SAN. SVC nodes are deployed in pairs called I/O groups. One node in the cluster is designated the configuration node.

Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection, or connections to the traffic on the most heavily loaded ISL(s) where more than one is used between these switches. This assumes a symmetrical network, and a specific workload applied evenly from all initiators and directed evenly to all targets. A symmetrical network means that all the initiators are connected at the same level, and all the controllers are connected at the same level.

Prepare
A configuration command that is used to cause cache data to be flushed in preparation for a copy trigger operation.

RAS
Reliability Availability and Serviceability.

RAID
Redundant Array of Independent Disks.

Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, albeit possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so even if one counterpart SAN is destroyed, the other counterpart SAN keeps functioning.

Remote fabric
Since the SVC supports remote copy, there might be significant distances between the components in the local cluster and those in the remote cluster. The remote fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together.

SAN
Storage Area Network.

SAN Volume Controller


The IBM System Storage SAN Volume Controller is a SAN based appliance designed for attachment to a variety of host computer systems, which carries out block level virtualization of disk storage.

SCSI
Small Computer Systems Interface.

SLP
The Service Location Protocol (SLP) is a service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It has been defined in RFC 2608.

SSPC
IBM System Storage Productivity Center (SSPC) replaces the master console for new installations of SAN Volume Controller Version 4.3.0. For SSPC planning, installation, and configuration information, see the following Web site: http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

Virtual Disk (VDisk)


A virtual disk (VDisk) is a SAN Volume Controller device that appears to host systems attached to the SAN as a SCSI disk. Each VDisk is associated with exactly one I/O group.

Chapter 3.

Planning and configuration


In this chapter, we describe the steps required when planning the installation of an IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the implications for your storage network, and discuss some performance considerations.

3.1 General planning rules


To achieve the most benefit from the SAN Volume Controller (SVC), pre-installation planning should include several important steps. These steps ensure that the SVC provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by avoiding changes to the SVC and the storage area network (SAN) environment to meet future growth needs.

Tip: The IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551, contains comprehensive information that goes into greater depth regarding the topics we discuss here. We go into much more depth about these topics in the redbook SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, available at:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Planning the SVC requires that you follow these steps:
1. Collect and document the number of hosts (application servers) to attach to the SVC, the traffic profile activity (read or write, sequential or random), and the performance requirements (input/output (I/O) operations per second).
2. Collect and document the storage requirements and capacities:
   - The total back-end storage already present in the environment to be provisioned on the SVC.
   - The total new back-end storage to be provisioned on the SVC.
   - The required virtual storage capacity, used as fully managed Virtual Disks and as Space-Efficient Virtual Disks.
   - The required storage capacity for local mirror copies (Virtual Disk Mirroring).
   - The required storage capacity for point-in-time copies (FlashCopy).
   - The required storage capacity for remote copies (Metro and Global Mirror).
   - Per host: the storage capacity, the host logical unit number (LUN) quantity, and the LUN sizes.
3. Define the local and remote SAN fabrics and clusters, if a remote copy or a secondary site is needed.
4. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for each cluster. Each pair of nodes (an I/O group) is the container for the virtual disks. How many I/O groups are needed depends on the overall performance requirements.
5. Design the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the hosts and the SVC, the SVC and the disk subsystems, between the SVC nodes, and for the ISLs between the local and remote fabrics.
6. Design the iSCSI network according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the hosts and the SVC.
7. Determine the IP addresses for the SVC cluster and for the hosts connected via iSCSI connections.
8. Define a naming convention for the SVC nodes, hosts, and storage subsystems.
9. Define the managed disks (MDisks) in the disk subsystems.

10. Define the managed disk groups (MDGs). This depends on the disk subsystems in place and the data migration needs.
11. Create and distribute the VDisks between the different I/O groups and the different MDGs in such a way as to optimize the I/O load between the hosts and the SVC. This can be an equal distribution of all the VDisks between the different nodes, or a distribution that takes into account the expected load from the different hosts.
12. Plan for the physical location of the equipment in the rack.
13. Determine the IP addresses for the SVC cluster, the SVC service IP address, and the SSPC (SVC console).
14. Define the number of FlashCopies required per host.
15. Define the cluster configuration backup and the business data backup.

SVC planning can be categorized into two different types:
- Physical planning
- Logical planning

3.2 Physical planning


There are several main factors to take into account when carrying out the physical planning of an SVC installation. The physical site must have the following characteristics:
- Power, cooling, and location requirements are met for the SVC nodes and the uninterruptible power supplies.
- Each uninterruptible power supply must be in the same rack as its associated SVC node.
- It is suggested that SVC nodes belonging to the same I/O group be placed in different racks.
- Plan for two different power sources if you have ordered a redundant AC power switch (available as an optional feature).
- An SVC node is one EIA unit high.
- Each of the uninterruptible power supplies (UPSs) that comes with SVC V5.1 is one EIA unit high; the UPS shipped with earlier versions of the SVC is two EIA units high.
- The SSPC (SVC console) is two EIA units high: one for the server and one for the keyboard and monitor.
- Other hardware devices can be in the same SVC rack, such as IBM System Storage DS4000, IBM System Storage DS6000, SAN switches, Ethernet switches, and others.
- Consider the maximum power rating of the rack; this must not be exceeded.

In Figure 3-1, we show two 2145-CF8 SVC nodes.

Figure 3-1 2145-CF8 SVC nodes

3.2.1 Preparing your UPS environment


Ensure that your physical site meets the installation requirements for the uninterruptible power supply (UPS).
Note: The 2145 UPS-1U is a Powerware 5115, and the 2145 UPS is a Powerware 5125.

2145 UPS-1U
The 2145 uninterruptible power supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can only operate with, the following node types:
- SAN Volume Controller 2145-CF8
- SAN Volume Controller 2145-8A4
- SAN Volume Controller 2145-8G4
- SAN Volume Controller 2145-8F2
- SAN Volume Controller 2145-8F4
It was also shipped with, and will operate with, the SAN Volume Controller 2145-4F2. When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200-240 V, single phase.
Note: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.

2145 UPS
The 2145 uninterruptible power supply (2145 UPS) is two EIA units high and was only shipped with the SAN Volume Controller 2145-4F2 prior to SVC V2.1. Be aware of the following considerations when configuring the 2145 UPS:
- Each 2145 UPS must be connected to a separate branch circuit.
- A UL-listed 15 A circuit breaker must be installed in each branch circuit that supplies power to the 2145 UPS.
- The voltage that is supplied to the 2145 UPS must be single phase, 200-240 V, with a supplied frequency of 50 or 60 Hz.

Heat output
The maximum heat output parameters differ depending on which SVC node models are connected. To obtain the current heat output values, refer to the IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551.

3.2.2 Physical rules


The SVC nodes must be installed in pairs to provide high availability, and each node in an I/O group must be connected to a different UPS. Figure 3-2 shows an example of power connections for the 2145-8G4.

Figure 3-2 Node uninterruptible power supply setup

In SVC versions prior to SVC V2.1, the Powerware 5125 UPS was shipped with the SVC; since SVC V4.2, the Powerware 5115 UPS is shipped with the SVC. You can upgrade an existing SVC cluster to V4.3.1.x and still use the Powerware 5125 UPS that was delivered with the SVC prior to V2.1.
- Each SVC node of an I/O group must be connected to a different UPS.
- Each UPS shipped with SVC V3.1, V4.1, V4.2, and V4.3 supports one node only, but each UPS in earlier versions of the SVC supports up to two SVC nodes (in distinct I/O groups).
- Each UPS pair that supports a pair of nodes must be connected to a different power domain (if possible) to reduce the chances of input power loss.
- For safety reasons, the UPSs should be installed in the lowest positions in the rack. If necessary, move lighter units towards the top of the rack to make way for them.
- The power and serial connections from a node must be connected to the same UPS; otherwise, the node will not boot.
- The 5115 and 5125 UPSs can be mixed with UPSs that were supplied with earlier SVC versions, but the UPS rules above have to be followed, and SVC nodes in the same I/O group must be attached to the same type of UPS, although not to the same UPS.
- The 2145-CF8, 2145-8A4, 2145-8G4, 2145-8F2, and 2145-8F4 hardware models must be connected to a 5115 UPS. They will not boot with a 5125 UPS.
Important: Do not share the SVC UPS with any other devices.
Figure 3-3 shows the ports for the 2145-CF8.

Figure 3-3 Ports

Figure 3-4 on page 69 shows a power cabling example for the 2145-CF8.

Figure 3-4 2145-CF8 power cabling

There are some guidelines to follow for the FC cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes. One example is the WWPN mapping of the ports. The 2145-8G4 and 2145-CF8 have the same mapping. Figure 3-5 on page 70 shows the WWPN mapping.

Figure 3-5 WWPN mapping

Figure 3-6 shows a sample layout across different racks.

Figure 3-6 Sample rack layout

We suggest that you place the racks in different rooms, if possible, in order to obtain protection against critical events (fire, water, power loss, and so on) that may affect one room only. Bear in mind the maximum distance supported between the nodes in one I/O group (100 meters). This distance can be extended by submitting a formal SCORE request to increase the limit, following the rules that will be specified in any SCORE approval.

3.2.3 Cable connections


Create a cable connection table or documentation, following your environment's documentation procedure, to track all of the connections required for the setup:
- Nodes
- UPS
- Ethernet
- iSCSI connections
- Fibre Channel ports
- SSPC (SVC console)

3.3 Logical planning


For logical planning, we intend to cover the following topics:
- Management IP addressing plan
- SAN zoning and SAN connections
- iSCSI IP addressing plan
- Back-end storage subsystem configuration
- SVC cluster configuration
- Managed disk group configuration
- Virtual Disk configuration
- Host mapping (LUN masking)
- Advanced copy functions
- SAN boot support
- Data migration from non-virtualized storage subsystems
- SVC configuration backup procedure

3.3.1 Management IP addressing plan


For management, bear in mind the following points:
- In addition to a Fibre Channel connection, each node has an Ethernet connection for configuration and error reporting.
- Each SVC cluster needs at least two IP addresses: the first IP address is used for management, and the second IP address is used for service. The service IP address becomes usable only when the SVC cluster is in service mode; be aware that this is a disruptive operation.
- Both IP addresses should be in the same IP subnet.
Example 3-1 Management IP address sample
management IP address: 10.11.12.120
service IP address:    10.11.12.121

- Each node in an SVC cluster needs to have at least one Ethernet connection.
- IBM supports the option of having multiple console access, using the traditional SVC HMC or the SSPC console. Multiple master consoles or SSPC consoles can access a single cluster, but when multiple master consoles access one cluster, you cannot concurrently perform configuration and service tasks. The master console can be supplied either on pre-installed hardware, or as software that is supplied to and subsequently installed by the user.
- With SVC 5.1, the cluster configuration node may now be accessed on both Ethernet ports, which means the cluster may have two IPv4 and two IPv6 addresses that are used for configuration purposes. Figure 3-7 on page 73 shows the IP configuration possibilities.

72

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423ch03.Planning and Configuration Angelo.fm

Figure 3-7 IP configuration possibility

- The cluster may therefore be managed by SSPCs on separate networks; this provides redundancy in the event of a failure of one of these networks.
- Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each Ethernet port on every node; these IP addresses are independent of the cluster configuration IP addresses.
- The CLI commands for managing the cluster IP addresses have therefore been moved from svctask chcluster to svctask chclusterip in SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses.
- When connecting to the SVC with SSH, choose one of the available IP addresses to connect to. There is no automatic failover capability, so if one network is down, use the other IP address. Customers may be able to use intelligence in Domain Name Servers (DNS) to provide some failover.
- When using the GUI, customers can add the cluster to the SVC Console multiple times (once per IP address). Failover is achieved by using the functional IP address when launching the SVC Console interface.
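As a minimal sketch of what this looks like on the CLI, the following commands assign an IPv4 configuration address to each of the two Ethernet ports. The addresses, gateway, and mask are illustrative values only; verify the exact svctask chclusterip syntax against the SVC 5.1 CLI documentation for your code level:

   svctask chclusterip -port 1 -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0
   svctask chclusterip -port 2 -clusterip 10.11.13.120 -gw 10.11.13.1 -mask 255.255.255.0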

3.3.2 SAN zoning and SAN connections


SAN storage systems using the SVC can be configured with two, or up to eight, SVC nodes, arranged in an SVC cluster. These nodes are attached to the SAN fabric, along with the disk subsystems and host systems. The SAN fabric is zoned to allow the SVC nodes to see each other and the disk subsystems, and to allow the hosts to see the SVC nodes. The hosts are not able to directly see or operate LUNs on the disk subsystems that are assigned to the SVC cluster. The SVC nodes within an SVC cluster must be able to see each other and all of the storage assigned to the SVC cluster. The zoning capabilities of the SAN switches are used to create these distinct zones.

SVC 5.1 supports 2 Gbps, 4 Gbps, or 8 Gbps Fibre Channel fabrics, depending on the hardware platform and on the switch where the SVC is connected. In an environment where you have a fabric with switches of multiple speeds, we recommend connecting the SVC and the disk subsystems to the switches operating at the highest speed.

All SVC nodes in the SVC cluster are connected to the same SANs, and present virtual disks to the hosts. These virtual disks are created from managed disk groups composed of managed disks presented by the disk subsystems.

There must be three basic distinct zone types in the fabric:
- SVC cluster zone: create one zone per fabric with all the SVC ports cabled to this fabric, to allow SVC intracluster node communication.
- Host zones: create an SVC host zone for each server that receives storage from the SVC cluster.
- Storage zone: create one SVC storage zone for each storage subsystem that is virtualized by the SVC.

In addition, if a DR solution with two SVC clusters is to be implemented, then one more zone has to be defined in the fabric:
- Remote Mirror zone: create one zone per fabric with all the SVC ports belonging to all the SVC clusters that will be configured in a remote copy partnership.

Figure 3-8 shows the SVC zoning topology.

Figure 3-8 SVC zoning topology

Figure 3-9 shows an example of SVC, host and Storage Subsystem connections.

Figure 3-9 Example of SVC, host, storage subsystem connections

The following guidelines must also be applied:
- Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are assigned to the SVC. All data transfer happens through the SVC nodes. Under some circumstances, a disk subsystem can present LUNs to both the SVC (as managed disks, which it then virtualizes to hosts) and to other hosts in the SAN.
- Mixed speeds are permitted within the fabric, but not for intracluster communication. You can use lower speeds to extend the distance.
- Uniform SVC port speed for 2145-4F2 and 2145-8F2 nodes: the optical fiber connections between the Fibre Channel switches and all 2145-4F2 or 2145-8F2 SVC nodes in a cluster must run at one speed, either 1 Gbps or 2 Gbps. 2145-4F2 or 2145-8F2 nodes with different speeds running on the node-to-switch connections in a single cluster are an unsupported configuration (and this is impossible to configure anyway). This rule does not apply to 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8 nodes, because the Fibre Channel ports on these nodes auto-negotiate their speed independently of one another and can run at 2 Gbps, 4 Gbps, or 8 Gbps.
- Each of the local or remote fabrics should not contain more than three ISL hops within each fabric. Operation with more ISLs is unsupported.
- When a local and a remote fabric are connected together for remote copy purposes, there should only be one ISL hop between the two SVC clusters. This means that some ISLs can be used in a cascaded switch link between the local and remote clusters, provided that the local and remote cluster internal ISL count is less than three. This gives a maximum of seven ISL hops in an SVC environment with both local and remote fabrics.
- The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules. This can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside the switch manufacturer's rules is not supported.
- The SAN must contain only supported switches; operation with other switches is unsupported.
- Host HBAs in dissimilar hosts, or dissimilar HBAs in the same host, need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. Here, dissimilar means that the hosts are running different operating systems or use different hardware platforms. Therefore, different levels of the same operating system are regarded as similar. This is a SAN interoperability issue rather than an SVC requirement.
- We recommend that the host zones contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance you want to have from your configuration.
Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a report that displays the connectivity between nodes and other controllers and hosts. This is particularly helpful in diagnosing SAN problems.

Zoning examples
Figure 3-10 on page 77 shows an SVC cluster zoning example.

Figure 3-10 SVC cluster zoning example

Figure 3-11 shows a storage subsystem zoning example.

Figure 3-11 Storage Subsystem zoning example

Figure 3-12 on page 78 shows a host zoning example.

Figure 3-12 Host zoning example
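As a text complement to the preceding zoning figures, the following sketch shows how the zones on one fabric (fabric A) might be named and populated for a two-node cluster, one back-end subsystem, and one AIX host with a single HBA port on this fabric. All zone and alias names are purely illustrative; use your own naming convention:

   SVC_CLUSTER_FABA:     SVC_N1_P1, SVC_N1_P3, SVC_N2_P1, SVC_N2_P3          (all SVC ports on fabric A)
   SVC_STOR_DS8K01_FABA: SVC_N1_P1, SVC_N1_P3, SVC_N2_P1, SVC_N2_P3, DS8K01_HA1_P1, DS8K01_HA2_P1
   SVC_HOST_AIX01_FABA:  AIX01_FCS0, SVC_N1_P1, SVC_N2_P1                    (one initiator, one port per node)

This layout follows the guideline of one initiator per host zone and one SVC port per node in the host zone, which keeps the path count per VDisk at four when the equivalent zones exist on fabric B.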

3.3.3 iSCSI IP addressing plan


SVC 5.1 supports host access via iSCSI (as an alternative to Fibre Channel), and the following guidelines apply:
- The SVC uses the built-in Ethernet ports for iSCSI traffic. All node types that can run SVC 5.1 can use the iSCSI feature.
- The SVC supports the CHAP authentication method for iSCSI.
- iSCSI IP addresses can fail over to the partner node in the I/O group if a node fails. This reduces the need for multipathing support in the iSCSI host.
- Configure iSCSI IP addresses for one or more nodes. Also configure the Internet Storage Name Service (iSNS) address in the SVC.
- The iSCSI Qualified Name (IQN) for an SVC node is: iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the cluster name and the node name, it is important not to change these names after iSCSI is deployed.
- Each node can be given an iSCSI alias, as an alternative to the IQN.
- Add the IQN of the host to an SVC host object in the same way that you add Fibre Channel WWPNs. Host objects can have both WWPNs and IQNs, but this would not be considered normal best practice unless the IQNs are on different servers than the WWPNs.
- Use standard iSCSI host connection procedures to discover and configure the SVC as an iSCSI target.
In the following figures, we show some of the ways that SVC 5.1 can be configured. Figure 3-13 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-13 Use of IPv4 addresses

The equivalent configuration can be set up with just IPv6 addresses. Figure 3-14 on page 79 shows the use of IPv4 management and iSCSI addresses in two different subnets.

Figure 3-14 IPv4 address plan with two subnets

Figure 3-15 shows the use of redundant networks.

Figure 3-15 Redundant network

Figure 3-16 on page 80 shows the use of a redundant network and a third subnet for management.

Figure 3-16 Redundant network with third subnet for management

Figure 3-17 shows the use of a redundant network for both iSCSI data and management.

Figure 3-17 Redundant network for iSCSI and management

Be aware that:
- All of the examples are valid using IPv4 and IPv6 addresses.
- It is valid to use IPv4 addresses on one port and IPv6 addresses on the other.
- It is valid to have different subnet configurations for IPv4 and IPv6 addresses.
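As an illustration only, node iSCSI port addresses of the kind shown in these figures can be assigned with the svctask cfgportip command. The node names, port number, and addresses below are example values, and you should confirm the exact parameters against the SVC 5.1 CLI documentation:

   svctask cfgportip -node node1 -ip 10.11.12.21 -gw 10.11.12.1 -mask 255.255.255.0 1
   svctask cfgportip -node node2 -ip 10.11.12.22 -gw 10.11.12.1 -mask 255.255.255.0 1

The trailing value is the Ethernet port ID on each node; repeat the commands for port 2 if you are using both ports for iSCSI.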

3.3.4 Back-end storage subsystem configuration


Back-end storage subsystem configuration planning must be applied to all the storage that will supply disk space to an SVC cluster. For the currently supported storage subsystems, refer to the following Web site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Apply the following general guidelines to back-end storage subsystem configuration planning:
- In the SAN, disk subsystems used by the SVC cluster are always connected to SAN switches and nothing else. Other disk subsystem connections outside of the SAN are possible.
- Multiple connections are allowed from the redundant controllers in the disk subsystem to improve data bandwidth performance. It is not mandatory to have a connection from each redundant controller in the disk subsystem to each counterpart SAN, but it is recommended. This means that controller A in the DS4000 could be connected to SAN A only, or to SAN A and SAN B, and controller B in the DS4000 could be connected to SAN B only, or to SAN B and SAN A.
- Split controller configurations are supported with certain rules and configuration guidelines. Refer to the IBM System Storage SAN Volume Controller Planning Guide, GA32-0551, for more information.
- All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem ports on each disk subsystem controller. Operation in a mode where two nodes see a different set of ports on the same controller becomes degraded. This can occur if inappropriate zoning was applied to the fabric. It can also occur if inappropriate LUN masking is used. This has important implications for a disk subsystem, such as the DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA worldwide names (WWNs) a storage partition can be mapped to.

In general, configure disk subsystems as you would without the SVC, but the following specific guidelines are suggested:
- Disk drives:
  - Be careful with large disk drives so that you do not end up with too few spindles to handle the load.
  - RAID-5 is suggested, but RAID-10 is viable and useful.
- Array sizes:
  - 8+P or 4+P is recommended for the DS4K/5K family, if possible.
  - Use a DS4K segment size of 128 KB or larger to help sequential performance.
  - Avoid SATA disks unless running SVC 4.2.1.x or later.
  - Upgrade to EXP810 drawers, if possible.
  - Create LUN sizes equal to the RAID array/rank size if it does not exceed 2 TB.
  - Create a minimum of one LUN per Fibre Channel port on a disk controller zoned with the SVC.
  - When adding more disks to a subsystem, consider adding the new MDisks to existing MDGs rather than creating additional small MDGs.
  - Use a Perl script to re-stripe VDisk extents evenly across all MDisks in an MDG. Go to http://www.ibm.com/alphaworks and search using svctools.
- Maximum of 64 WWNNs:
  - EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port; each port appears as a separate controller to the SVC. Upgrade to SVC 4.2.1 or later so that you can map LUNs through up to 16 FC ports; this results in 16 WWNNs/WWPNs used out of the maximum of 64.
  - IBM, EMC CLARiiON, and HP subsystems use one WWNN per subsystem; each appears as a single controller with multiple ports/WWPNs, with a maximum of 16 ports/WWPNs per WWNN, using one WWNN out of the maximum of 64.
- DS8K using four or eight 4-port HA cards:
  - Use ports 1 and 3, or ports 2 and 4, on each card. This provides 8 or 16 ports for SVC use.
  - Use a minimum of 8 ports for up to 40 ranks; use 16 ports, the maximum, for 40 or more ranks.
  - Upgrade to SVC 4.2.1.9 or later to drive more workload to the DS8K.
  - Increased queue depth for DS4K/5K, DS6K, DS8K, and EMC DMX.
- DS4K/5K and EMC CLARiiON/CX:
  - Both have the preferred controller architecture, and the SVC honors this configuration.
  - Use a minimum of 4, and preferably 8 or more, ports, up to a maximum of 16; more ports equate to more concurrent I/O driven by the SVC.

  - Support is provided for mapping controller A ports to fabric A and controller B ports to fabric B, or for cross-connecting ports to both fabrics from both controllers. The latter is preferred to avoid AVT/Trespass occurring if a fabric, or all paths to a fabric, fail.
  - Upgrade to SVC 4.3.1 or later for an SVC queue depth change for CX models, because it drives more I/O per port per MDisk.
- DS3400:
  - Use a minimum of 4 ports.
  - Upgrade to SVC 4.3.x or later for better resiliency if the DS3400 controllers reset.
- XIV requirements and restrictions:
  - The SVC cluster must be running version 4.3.0.1 or later to support the XIV.
  - The use of some XIV functions on LUNs presented to the SVC is not supported. You cannot use snapshots, thin provisioning, synchronous replication, or LUN expansion on XIV MDisks.
  - A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.
- Full 15-module XIV recommendations (79 TB usable):
  - Use two interface host ports from each of the 6 interface modules.
  - Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC node ports.
  - Create 48 LUNs of equal size, each a multiple of 17 GB; you will get approximately 1632 GB per LUN if you are using the entire full-frame XIV with the SVC.
  - Map the LUNs to the SVC as 48 MDisks and add all of them to the one XIV MDG, so that the SVC will drive the I/O to 4 MDisks/LUNs for each of the 12 XIV Fibre Channel ports. This provides a good queue depth on the SVC to drive the XIV adequately.
- Six-module XIV recommendations (27 TB usable):
  - Use two interface host ports from each of the 2 active interface modules.
  - Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive) and zone these 4 ports with all SVC node ports.
  - Create 16 LUNs of equal size, each a multiple of 17 GB; you will get approximately 1632 GB per LUN if you are using the entire XIV with the SVC.
  - Map the LUNs to the SVC as 16 MDisks and add all of them to the one XIV MDG, so that the SVC will drive I/O to 4 MDisks/LUNs for each of the 4 XIV Fibre Channel ports. This provides a good queue depth on the SVC to drive the XIV adequately.
- Nine-module XIV recommendations (43 TB usable):
  - Use two interface host ports from each of the 4 active interface modules.
  - Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive) and zone these 8 ports with all SVC node ports.
  - Create 26 LUNs of equal size, each a multiple of 17 GB; you will get approximately 1632 GB per LUN if you are using the entire XIV with the SVC.
  - Map the LUNs to the SVC as 26 MDisks and add all of them to the one XIV MDG, so that the SVC will drive I/O to 3 MDisks/LUNs on each of 6 ports and 4 MDisks/LUNs on the other 2 XIV Fibre Channel ports. This provides a good queue depth on the SVC to drive the XIV adequately.
- Configuring XIV host connectivity for the SVC cluster:
  - Create one host definition on the XIV and include all SVC node WWPNs.
  - You can create clustered host definitions (one per I/O group), but the preceding approach is easier.
  - Map all LUNs to all SVC node WWPNs.
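As a rough sanity check of the full-frame LUN sizing above (this is simply our arithmetic, using rounded decimal values):

   79 TB usable / 48 LUNs ≈ 1645 GB per LUN
   largest multiple of 17 GB that fits: 96 x 17 GB = 1632 GB per LUN
   48 LUNs x 1632 GB ≈ 78.3 TB presented to the SVC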

3.3.5 SVC cluster configuration


To ensure high availability in SVC installations, keep the following considerations in mind when you design a SAN with the SVC:
- An SVC node, in this case a 2145-4F2 or 2145-8F2, always contains two host bus adapters (HBAs), each of which has two Fibre Channel (FC) ports. If an HBA fails, this remains a valid configuration, and the node operates in degraded mode. If an HBA is physically removed from an SVC node, then the configuration is unsupported. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 have one HBA with four ports.
- All nodes in a cluster must be in the same LAN segment. This is because the nodes in the cluster must be able to assume the same cluster, or service, IP address. Make sure that the network configuration allows any of the nodes to use these IP addresses.
- To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for writes).
- The UPS must be in the same rack as the node it supplies, and each UPS can only have one node connected.
- The Fibre Channel SAN connections between the SVC nodes and the switches are optical fiber. These connections can run at 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch. The 2145-4F2 and 2145-8F2 nodes are capable of a maximum of 2 Gbps, which is determined by the cluster speed.
- SVC node ports must be connected to the Fibre Channel fabric only. Direct connections between the SVC and hosts, or disk subsystems, are unsupported.
- Two SVC clusters cannot share the same LUNs in a subsystem. The consequence of sharing the same disk subsystem LUNs can be data loss: if the same MDisk becomes visible on two different SVC clusters, then this is an error that can cause data corruption.
- The two nodes within an I/O group can be co-located (within the same set of racks), or can be located in different racks and different rooms in order to deploy a very basic business continuity solution.
- If a split node cluster (split I/O group) solution is implemented, take care with respect to the maximum distance allowed (100 m) between the nodes in an I/O group; otherwise, you will require a SCORE request in order to be supported for longer distances. Ask your IBM service representative for more detailed information about the SCORE process.
- If a split node cluster (split I/O group) solution is implemented, we recommend using a business continuity solution for the storage subsystem using the VDisk Mirroring option, and taking care with respect to the SVC cluster quorum disk placement, as shown in Figure 3-18, where the active quorum disk is located separately in a third site or room.
- The SVC uses three managed disks as quorum disks for the cluster. For redundancy purposes, it is recommended, if possible, to place the three managed disks in three different storage subsystems.
- If a split node cluster (split I/O group) solution is implemented, two of the three quorum disks can be co-located in the same rooms where the SVC nodes are located, but the active quorum disk (quorum index 0) must be in a separate room.
Figure 3-18 shows a schematic split I/O group solution.

Figure 3-18 Split I/O group solution
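In a layout such as this, the quorum MDisks are typically assigned explicitly rather than left to automatic selection. A minimal sketch, assuming the svctask setquorum command is available at this code level and using three illustrative MDisk names, one per storage subsystem, with the active quorum (index 0) placed in the third room:

   svctask setquorum -quorum 0 mdisk30
   svctask setquorum -quorum 1 mdisk10
   svctask setquorum -quorum 2 mdisk20

Verify the command name and parameters against the SVC 5.1 CLI documentation before using them in your environment.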

3.3.6 Managed Disk Group configuration


The managed disk group (MDG) is at the center of the many-to-many relationship between managed disks and virtual disks. It acts as a container into which managed disks contribute chunks of disk blocks, known as extents, and from which virtual disks consume these extents of storage. MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC, and can be either managed or unmanaged:
- A managed MDisk is an MDisk that is assigned to an MDG.
- MDGs are collections of managed disks. A managed disk is contained within exactly one MDG.
- An SVC supports up to 128 MDGs.
- There is no limit to the number of virtual disks that can be in an MDG other than the limit per cluster.
- MDGs are also collections of virtual disks. Under normal circumstances, a virtual disk is associated with exactly one MDG. The exception to this is when a virtual disk is migrated, or mirrored, between MDGs.
The SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, and 2048 MB. The extent size is a property of the MDG, which is set when the MDG is created. It cannot be changed, and all managed disks that are contained in the MDG have the same extent size, so all virtual disks associated with the MDG must also have the same extent size. Table 3-1 on page 85 shows all the extent sizes available in an SVC.
Table 3-1 Extent size and maximum cluster capacities
Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB

The maximum cluster capacity is related to the extent size: a 16 MB extent gives 64 TB, and the capacity doubles for each increment in extent size, for example, 32 MB gives 128 TB. We strongly recommend a minimum extent size of 128 MB or 256 MB (the SPC benchmarks used a 256 MB extent). Pick one extent size and use it for all MDGs; you cannot migrate VDisks between MDGs with different extent sizes.

MDG reliability, availability, and serviceability (RAS) considerations:
- It may make sense to create multiple MDGs if you ensure that a host only gets its VDisks built from one of the MDGs. If the MDG goes offline, it then impacts only a subset of all the hosts using the SVC; however, this could lead to a high number of MDGs, close to the SVC limit.
- If you are not going to isolate hosts to MDGs, then create one large MDG. This assumes the physical disks are all the same size, speed, and RAID level.
- An MDG goes offline if an MDisk is unavailable, even if that MDisk has no data on it. Do not put MDisks into an MDG until they are needed.
- Create at least one separate MDG for all the image mode virtual disks.
- Make sure that the LUNs given to the SVC have any host persistent reserves removed.

MDG performance considerations:
- It may make sense to create multiple MDGs if you are attempting to isolate workloads to different disk spindles.
- MDGs with too few MDisks cause an MDisk overload, so it is better to have a higher spindle count in an MDG to meet the workload requirements.

MDG and SVC cache relationship:
SVC 4.2.1 first introduced cache partitioning to the SVC code base. The decision was made to provide flexible partitioning, rather than hard coding a specific number of partitions. This flexibility is provided on a Managed Disk Group (MDG) boundary. That is, the cache automatically partitions the available resources on a per Managed Disk Group basis. Most users create a single Managed Disk Group from the LUNs provided by a single disk controller, or a subset of a controller or a collection of the same controllers, based on the characteristics of the LUNs themselves, for example, RAID-5 versus RAID-10, or 10K RPM versus 15K RPM. The overall strategy is to protect against individual controller overloading or faults. If many controllers (or, in this case, Managed Disk Groups) are overloaded, then performance may still suffer. Table 3-2 shows the upper limit of write cache data per MDG.

Table 3-2 Limit of the cache data
Number of MDGs    Upper limit
1                 100%
2                 66%
3                 40%
4                 30%
5 or more         25%

You can think of the rule as being that no single partition can occupy more than its upper limit of cache capacity with write data. These are upper limits, and they are the point at which the SVC cache starts to limit the incoming I/O rates for the Virtual Disks (VDisks) created from the Managed Disk Group. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full. That is, the host writes will be serviced on a one-out-one-in basis as the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited; all I/O destined for other (non-limited) Managed Disk Groups continues as normal. Read I/O requests for the limited partition also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can actually sustain (otherwise the partition would not have reached the upper limit), reads are likely to be serviced equally slowly.
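To tie the extent size recommendation and the cache partitioning discussion together, the following sketch creates a single MDG with a 256 MB extent size and adds four MDisks to it. The group and MDisk names are illustrative only:

   svctask mkmdiskgrp -name MDG1_DS8K -ext 256
   svctask addmdisk -mdisk mdisk0:mdisk1:mdisk2:mdisk3 MDG1_DS8K

With only one or two MDGs defined, remember that a single MDG can hold up to 100% or 66% of the write cache, respectively, as shown in Table 3-2.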

3.3.7 Virtual Disk configuration


An individual virtual disk is a member of one MDG and one I/O group. When you want to create a VDisk, you first have to know what this VDisk will be used for. Based on that, you can decide which MDG to select in order to meet your requirements in terms of cost, performance, and availability:
- The MDG defines which managed disks provided by the disk subsystems make up the virtual disk.
- The I/O group (two nodes make an I/O group) defines which SVC nodes provide I/O access to the virtual disk.
Note: There is no fixed relationship between I/O groups and MDGs.

Therefore, you could define the virtual disks using the following considerations:
- Optimize the performance between the hosts and the SVC by distributing the VDisks between the different nodes of the SVC cluster. This means spreading the load equally across the nodes in the SVC cluster.
- Get the level of performance, reliability, and capacity you require by using the MDG that corresponds to your needs (you can access any MDG from any node). That is, choose the MDG that fulfils the demands for your VDisk with respect to performance, reliability, and capacity.
- I/O group considerations: When you create a VDisk, it is associated with one node of an I/O group. By default, every time that you create a new VDisk, it is associated with the next node using a round-robin algorithm. You can instead specify a preferred access node, which is the node through which you send I/O to the VDisk, rather than using the round-robin algorithm. A virtual disk is defined for an I/O group, which has the following implications:
Even if you have eight paths for each virtual disk, all I/O traffic flows only toward one node (the preferred node). Therefore, only four paths are really used by SDD. The other four paths are used only in the case of a failure of the preferred node, or when a concurrent code upgrade (CCU) is running.

Creating image mode virtual disks
Use image mode virtual disks when a managed disk already has data on it, from a non-virtualized disk subsystem. When an image mode virtual disk is created, it directly corresponds to the managed disk from which it is created. Therefore, virtual disk LBA x equals managed disk LBA x. The capacity of an image mode VDisk defaults to the capacity of the supplied MDisk. When you create an image mode disk, the managed disk must have a mode of unmanaged and therefore must not belong to any MDG. A capacity of 0 is not allowed. Image mode virtual disks can be created in sizes with a minimum granularity of 512 bytes, and must be at least one block (512 bytes) in size.

Creating managed mode virtual disks with sequential or striped policy
When creating a managed mode virtual disk with a sequential or striped policy, you must use a number of managed disks containing extents that are free and of a total size that is equal to or greater than the size of the virtual disk that you want to create. It may be the case that there are sufficient extents available on the managed disk, but that there is no contiguous block large enough to satisfy the request.

Space-efficient virtual disk considerations
When creating a space-efficient volume, it is necessary to understand the utilization patterns of the applications or group users accessing this volume. Items such as the actual size of the data, the rate of creation of new data, and the modification or deletion of existing data all need to be taken into consideration. There are two operating modes for Space-Efficient VDisks:
- Auto-expand VDisks allocate storage from a managed disk group on demand, with minimal user intervention required, but a misbehaving application can cause a VDisk to expand until it has consumed all of the storage in a managed disk group.
- Non-auto-expand VDisks have a fixed amount of assigned storage. In this case, the user must monitor the VDisk and assign additional capacity if and when required. A misbehaving application can only cause the VDisk it is using to fill up.
Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a disk goes offline, either through a lack of available physical storage on auto-expand, or because a disk marked as non-expand has not been expanded, then there is a danger of data being left in the cache until some storage is made available. This is not a data integrity or data loss issue, but you should not rely on the SVC cache as a backup storage mechanism.

Recommendations:
- We highly recommend keeping a warning level on the used capacity so that it provides adequate time to provision more physical storage. Warnings should not be ignored by an administrator.
- Use the auto-expand feature of space-efficient VDisks.
- The grain size allocation unit for the real capacity in the VDisk can be set as 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size uses space more effectively, but it results in a larger directory map, which may reduce performance.

Space-Efficient VDisks require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Space-Efficient VDisk requires approximately one directory I/O for every user I/O, so performance can be up to 50% lower than that of a normal VDisk. The directory is two-way write-back cached (just like the SVC fast-write cache), so some applications will perform better. Space-Efficient VDisks also require more CPU processing, so the performance per I/O group will be lower.
Starting with SVC 5.1 there is Space-Efficient VDisk zero detect. This feature enables customers to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) using VDisk Mirroring.
VDisk Mirroring
If you are planning to use the VDisk Mirroring option, apply the following guidelines:
- Create or identify two different MDGs in which to allocate space for your mirrored VDisk.
- If possible, use MDGs with MDisks that share the same characteristics; otherwise, the VDisk performance can be affected by the lower performance MDisk.

3.3.8 Host mapping (LUN masking)


For the host and application servers, the following guidelines apply:
Each SVC node presents a VDisk to the SAN through four paths. Because in normal operation two nodes are used to provide redundant paths to the same storage, a host with two HBAs can see eight paths to each LUN presented by the SVC. Use zoning to limit the pathing from a minimum of two paths to the maximum available of eight paths, depending on the kind of high availability and performance you want in your configuration. We recommend using zoning to limit the pathing to four paths.
The hosts must run a multipathing device driver to resolve this back to a single device. The multipathing driver supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported. For operating system specific information about MPIO support, see:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
The number of paths to a VDisk from a host to the nodes in the I/O group that owns the VDisk must not exceed eight, even if this is not the maximum number of paths supported by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host VDisk, the fabrics should be zoned so that each host Fibre Channel port is zoned with one port from each SVC node in the I/O group that owns the VDisk.
Note: The recommended number of VDisk paths is four.
If a host has multiple HBA ports, each port should be zoned to a different set of SVC ports to maximize high availability and performance. To configure more than 256 hosts, you need to configure the host to I/O group mappings on the SVC. Each I/O group can contain a maximum of 256 hosts, so it is possible to create 1024 host objects on an eight-node SVC cluster. VDisks can only be mapped to a host that is associated with the I/O group to which the VDisk belongs.
Port masking
You can use a port mask to control the node target ports that a host can access. This satisfies two requirements:

- As part of a security policy, to limit the set of WWPNs that are able to obtain access to any VDisks through a given SVC port.
- As part of a scheme to limit the number of logins with mapped VDisks visible to a host multipathing driver (such as SDD), and thus limit the number of host objects configured, without resorting to switch zoning.
The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).
The SVC supports connection to the Cisco MDS family and the Brocade family. See the following Web site for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
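The following is a minimal sketch, assuming a Python interpreter on the management workstation, of how the 4-bit port mask value described above can be derived from the set of SVC node ports a host should use. The function name and example port numbers are illustrative only and are not part of the SVC CLI.

```python
def build_port_mask(enabled_ports, total_ports=4):
    """Return the SVC-style binary port mask string.

    The mask is read right to left: the rightmost bit represents
    port 1, so enabling ports 1 and 2 yields '0011'.
    """
    bits = []
    for port in range(total_ports, 0, -1):        # ports 4..1, left to right
        bits.append('1' if port in enabled_ports else '0')
    return ''.join(bits)

if __name__ == "__main__":
    # Host should only log in through node ports 1 and 2.
    mask = build_port_mask({1, 2})
    print(mask)                                    # prints 0011
    # The mask string can then be supplied to svctask mkhost or chhost
    # (check the CLI help of your SVC level for the exact parameter syntax).
```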

3.3.9 Advanced Copy Services


The Advanced Copy Services (ACS) in the SVC are:
- FlashCopy
- Metro Mirror
- Global Mirror
SVC Advanced Copy Services must apply the following guidelines.
FlashCopy guidelines:
- Identify which applications need a FlashCopy function implemented for their VDisks. FlashCopy is a relationship between VDisks; those VDisks can belong to different MDGs and different storage subsystems. FlashCopy can be used for backup purposes, by interacting with the TSM Agent, or for cloning a particular environment.
- Define which FlashCopy type best fits your requirements: No copy, Full copy, Space-Efficient, or Incremental.
- Define which FlashCopy rate best fits your requirements in terms of performance and the time to complete the FlashCopy. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3.
- Define the grain size you want to use. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target VDisk; smaller grain sizes can have the opposite effect. Keep in mind that the data structure and the source data location can modify those effects. In a real environment, check what your FlashCopy procedure delivers in terms of data copied at every run and elapsed time, compare it to the new SVC FlashCopy results, and adapt the grains per second and copy rate parameters if necessary to fit your environment's requirements.
Table 3-3 Grain splits per second

User percentage   Data copied/second   256 KB grain/second   64 KB grain/second
1-10              128 KB               0.5                   2
11-20             256 KB               1                     4
21-30             512 KB               2                     8
31-40             1 MB                 4                     16
41-50             2 MB                 8                     32
51-60             4 MB                 16                    64
61-70             8 MB                 32                    128
71-80             16 MB                64                    256
81-90             32 MB                128                   512
91-100            64 MB                256                   1024
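As a hedged illustration of the relationship shown in Table 3-3, the following sketch computes the data copied per second and the resulting grain-split rates for a given background copy rate percentage. It simply encodes the doubling-per-decile pattern visible in the table and is not an official formula from the SVC documentation.

```python
def background_copy_rates(copy_rate_percent):
    """Map a FlashCopy background copy rate (1-100) to the values in Table 3-3.

    Data copied per second starts at 128 KB for rates 1-10 and doubles
    for every further band of 10, up to 64 MB for rates 91-100.
    """
    if not 1 <= copy_rate_percent <= 100:
        raise ValueError("copy rate must be between 1 and 100")
    band = (copy_rate_percent - 1) // 10           # 0..9
    data_kb_per_sec = 128 * (2 ** band)
    return {
        "data_KB_per_sec": data_kb_per_sec,
        "grains_per_sec_256KB": data_kb_per_sec / 256,
        "grains_per_sec_64KB": data_kb_per_sec / 64,
    }

if __name__ == "__main__":
    print(background_copy_rates(50))    # 2048 KB/s -> 8 (256 KB) and 32 (64 KB) grains/s
    print(background_copy_rates(100))   # 65536 KB/s -> 256 and 1024 grains/s
```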

Metro Mirror and Global Mirror guidelines


SVC supports both intra-cluster and inter-cluster Metro Mirror and Global Mirror. From the intra-cluster point of view, any single cluster is a reasonable candidate for Metro Mirror or Global Mirror operation. Inter-cluster operation on the other hand will need a pair of clusters, separated by a number of moderately high bandwidth links. Figure 3-19 shows a schematic of Metro Mirror connections.

Figure 3-19 Metro Mirror connections

Figure 3-19 contains two redundant fabrics. Part of each fabric exists at the local and remote cluster. There is no direct connection between the two fabrics. Technologies for extending the distance between two SVC clusters can be broadly divided into two categories:
- Fibre Channel extenders
- SAN multiprotocol routers


Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support web site at:
http://www.ibm.com/storage/support/2145

IBM has tested a number of Fibre Channel extenders and SAN router technologies with the SVC. These must be planned, installed, and tested so that the following requirements are met:
- For SVC 4.1.0.x, the round-trip latency between sites must not exceed 68 ms (34 ms one way) for Fibre Channel extenders, or 20 ms (10 ms one way) for SAN routers.
- For SVC 4.1.1.x and later, the round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global Mirror, this allows a distance between primary and secondary sites of up to 8000 km, using a planning assumption of 100 km per 1 ms of round-trip link latency. The latency of long-distance links depends on the technology used to implement them. A point-to-point dark fiber-based link typically provides a round-trip latency of 1 ms per 100 km or better; other technologies provide longer round-trip latencies, which reduces the maximum supported distance.
- The configuration must be tested with the expected peak workloads.
- When Metro Mirror or Global Mirror is being used, a certain amount of bandwidth is required for SVC inter-cluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters. Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, generated by different sizes of cluster.

Figure 3-20 Amount of heartbeat traffic

These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored VDisks. Half of the data is sent by one cluster and half by the other cluster. The traffic is divided evenly over all available inter-cluster links; therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.
- The bandwidth between sites must, at the very least, be sized to meet the peak workload requirements while maintaining the maximum latency specified above. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O to VDisks in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the bandwidth indicated in the table above, but the true bandwidth required for the link can only be determined by considering the peak write bandwidth to virtual disks participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth. (A sizing sketch follows this list.)
- If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements continue to hold true even during such single failure conditions.

- The configuration must be tested to simulate failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary.
- The configuration must be tested to confirm that any failover mechanisms in the inter-cluster links interoperate satisfactorily with the SVC.
- The Fibre Channel extender should be treated as a normal link.
- The bandwidth and latency measurements must be made by, or on behalf of, the client, and are not part of the standard installation of the SVC by IBM. IBM recommends that these measurements are made during installation and that records are kept. Testing should be repeated following any significant changes to the equipment providing the inter-cluster link.
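To pull the preceding link requirements together, here is a minimal sizing sketch. The 100 km per 1 ms of round-trip latency planning figure and the 80 ms limit come from the text above, while the example workload numbers and the heartbeat figure are placeholders to be replaced with your own measurements (for example, the values from Figure 3-20).

```python
def check_intercluster_link(round_trip_ms, peak_write_mb_s,
                            sync_copy_mb_s, heartbeat_mb_s):
    """Rough sanity check of an SVC inter-cluster link (SVC 4.1.1 and later rules)."""
    max_rtt_ms = 80.0                      # supported round-trip latency limit
    km_per_ms_rtt = 100.0                  # planning assumption: 100 km per 1 ms RTT
    latency_ok = round_trip_ms <= max_rtt_ms
    est_distance_km = round_trip_ms * km_per_ms_rtt
    required_bandwidth = peak_write_mb_s + sync_copy_mb_s + heartbeat_mb_s
    return {
        "latency_ok": latency_ok,
        "estimated_distance_km": est_distance_km,
        "required_bandwidth_MB_s": required_bandwidth,
    }

if __name__ == "__main__":
    # Placeholder workload: 40 MB/s peak writes, 10 MB/s sync copy, ~0.5 MB/s heartbeat.
    print(check_intercluster_link(round_trip_ms=30, peak_write_mb_s=40,
                                  sync_copy_mb_s=10, heartbeat_mb_s=0.5))
```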

Global Mirror guidelines


When using SVC Global Mirror, all components in the SAN must be capable of sustaining the workload generated by application hosts as well as the Global Mirror background copy workload. If this is not true, Global Mirror may automatically stop your relationships to protect your application hosts from increased response times. Therefore, it is important to configure each component correctly. In addition, you should use a SAN performance monitoring tool, such as IBM System Storage Productivity Center, which allows you to continuously monitor the SAN components for error conditions and performance problems. This will assist you in detecting potential issues before they impact your disaster recovery solution.
- The long-distance link between the two clusters must be provisioned to allow for the peak application write workload to the Global Mirror source VDisks, plus the customer-defined level of background copy.
- The peak application write workload should ideally be determined by analyzing SVC performance statistics. Statistics should be gathered over a typical application I/O workload cycle, which may be days, weeks, or months depending on the environment in which the SVC is used. These statistics should be used to find the peak write workload that the link must be able to support.
- Characteristics of the link may change with use; for example, the latency may increase as the link is used to carry an increased bandwidth. The user should be aware of the link's behavior in such situations and ensure that the link remains within the specified limits. If the characteristics are not known, testing should be performed to gain confidence in the link's suitability.
- Users of Global Mirror should consider how to optimize the performance of the long-distance link. This depends on the technology used to implement the link. For example, when transmitting FC traffic over an IP link, it may be desirable to enable jumbo frames to improve efficiency.
- It is supported to use Global Mirror and Metro Mirror between the same two clusters.
- It is not supported for cache-disabled VDisks to participate in a Global Mirror relationship.
- The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for most customers.
- During SAN maintenance, the user should either: reduce the application I/O workload for the duration of the maintenance (so that the degraded SAN components are capable of the new workload); disable the gmlinktolerance feature; increase the gmlinktolerance value (meaning that application hosts may see extended response times from Global Mirror VDisks); or stop the Global Mirror relationships. If the gmlinktolerance value is increased for maintenance lasting x minutes, it should only be reset to the normal value x minutes after the end of the maintenance activity. If gmlinktolerance is disabled for the duration of the maintenance, it should be re-enabled once the maintenance is complete.


Global Mirror VDisks should have their preferred nodes evenly distributed between the nodes of the clusters. Each VDisk within an I/O group has a preferred node property that can be used to balance the I/O load between nodes in that group. This property is also used by Global Mirror to route I/O between clusters. Figure 3-21 shows the correct relationship between VDisks in a Metro Mirror or Global Mirror solution.

Figure 3-21 Correct VDisk relationship

- The capabilities of the storage controllers at the secondary cluster must be provisioned to allow for the peak application workload to the Global Mirror VDisks, plus the customer-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster, so provision the secondary controllers to maximize the amount of I/O that applications can make to Global Mirror VDisks.
- We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks without a complete review. Be careful when using a slower disk subsystem for the secondary VDisks of high-performance primary VDisks: the SVC cache may not be able to buffer all the writes, and flushing cache writes to SATA may slow I/O at the production site.
- Global Mirror VDisks at the secondary cluster should be in dedicated MDisk groups (which contain no non-Global Mirror VDisks).
- Storage controllers should be configured to support the Global Mirror workload that is required of them. This might be achieved by dedicating storage controllers to only Global Mirror VDisks, by configuring the controllers to guarantee sufficient quality of service for the disks being used by Global Mirror, or by ensuring that physical disks are not shared between Global Mirror VDisks and other I/O (for example, by not splitting an individual RAID array).
- MDisks within a Global Mirror MDisk group should be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This is true of all MDisk groups, but it is particularly important for maintaining performance when using Global Mirror.
- When a consistent relationship is stopped, for example by a persistent I/O error on the inter-cluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship begins the process of synchronizing new data to the secondary disk. While this is in progress, the relationship is in the inconsistent_copying state, which means that the Global Mirror secondary VDisk will not be in a usable state until the copy has completed and the relationship has returned to a consistent state. Therefore, it is highly advisable to create a FlashCopy of the secondary VDisk before restarting the relationship. Once started, the FlashCopy will provide a consistent copy of the data, even while the
Global Mirror relationship is copying. If the Global Mirror relationship does not reach the synchronized state (if, for example, the inter-cluster link experiences further persistent I/O errors), then the FlashCopy target can be used at the secondary site for disaster recovery purposes. If you are planning to use an FCIP inter-cluster link it is very important to design and size the pipe correctly. Example 3-2 on page 95 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example

Take the amount of write data within 24 hours, multiply it by 4 to allow for peaks, and translate it into MB/s to determine the WAN link needed. Example for 250 GB a day:
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86400 secs
1,000,000,000,000 / 86400 = approximately 12 MB/s
This means an OC3 or higher is needed (155 Mbps or higher).
If compression is available on routers or WAN communication devices, smaller pipelines may be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, you may want to consider suspending Global Mirror during that time frame. If the network bandwidth is too small to handle the traffic, application write I/O response times may be elongated. For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Keep in mind that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000. You also need to consider the initial sync and re-sync workload. The Global Mirror partnership's background copy rate must be set to a value appropriate to the link and the secondary back-end storage. Keep in mind that the more bandwidth you give to the sync and re-sync operations, the less workload can be delivered by the SVC for regular data traffic. The Metro Mirror or Global Mirror background copy rate is pre-defined: the per-VDisk limit is 25 MBps, and the maximum per I/O group is roughly 250 MBps. Be careful when using Space-Efficient secondary VDisks at the DR site, because a Space-Efficient VDisk can have up to 50% lower performance than a normal VDisk and can impact the performance of the VDisks at the primary site. Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80-120 ms; greater than 80 ms requires a SCORE/RPQ submission.
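The back-of-the-envelope formula in Example 3-2 can be expressed as a small script. This is a sketch of the same best-guess calculation, not an IBM-supplied sizing tool; the 4x peak factor is simply the planning assumption used in the example above.

```python
def wan_link_estimate_mb_s(write_gb_per_day, peak_factor=4):
    """Estimate the WAN bandwidth (MB/s) needed for Global Mirror.

    Multiplies the daily write volume by a peak allowance factor and
    spreads it over 24 hours, as in Example 3-2.
    """
    bytes_per_day = write_gb_per_day * peak_factor * 1_000_000_000
    seconds_per_day = 24 * 3600
    return bytes_per_day / seconds_per_day / 1_000_000

if __name__ == "__main__":
    mb_s = wan_link_estimate_mb_s(250)         # 250 GB of writes per day
    print(f"~{mb_s:.1f} MB/s")                 # ~11.6 MB/s, so an OC3 (155 Mbps) or better
```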

3.3.10 SAN boot support


The SVC supports SAN boot for AIX and Windows 2003 and other operating systems. SAN boot support could change from time to time, so we recommend regularly checking the following Web site: http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html


3.3.11 Data migration from non-virtualized storage subsystem


Data migration is a very important part of an SVC implementation, so a data migration plan must be prepared carefully. You may need to migrate your data because you are:
- Redistributing workload within a cluster across the disk subsystem
- Moving workload onto newly installed storage
- Moving workload off old or failing storage, ahead of decommissioning it
- Moving workload to rebalance a changed workload
- Migrating data from an older disk subsystem to SVC managed storage
- Migrating data from one disk subsystem to another
Because there is more than one data migration method, choose the one that best fits your environment, your operating system platform, your kind of data, and your application's service level agreement. Data migration methods can be divided into three groups:
- Based on operating system LVM or commands
- Based on special data migration software
- Based on the SVC data migration feature
With data migration, we recommend that you apply the following guidelines:
- Choose the data migration method that best fits your operating system platform, your kind of data, and your service level agreement.
- Check the interoperability matrix for the storage subsystem to which your data is being migrated:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- Choose where you want to place your data after migration in terms of the MDG related to a specific storage subsystem tier.
- Check that a sufficient amount of free space or extents is available in the target MDG.
- Decide whether your data is critical and needs to be protected by the VDisk mirroring option, or whether it has to be replicated to a remote site for disaster recovery.
- Prepare offline all the zoning and LUN masking/host mappings you may need, in order to minimize downtime during the migration.
- Check in the interoperability matrix whether any device driver, HBA firmware, multipathing software, or operating system upgrade is needed:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- Prepare a detailed operation plan so that you do not overlook anything at data migration time.
- Execute a data backup before you start any data migration. This should be part of the regular data management process.
You may want to use the SVC as a data mover to migrate data from one non-virtualized storage subsystem to another non-virtualized storage subsystem. In this case, you may have to add some additional checks related to the specific storage subsystem to which you want to migrate. Be careful when using slower disk subsystems for the secondary VDisks of high-performance primary VDisks: the SVC cache may not be able to buffer all the writes, and flushing cache writes to SATA may slow I/O at the production site.

3.3.12 SVC configuration back-up procedure


We recommend that you save the configuration externally whenever changes, such as adding new nodes or disk subsystems, have been performed on the cluster. Configuration saving is a crucial part of SVC management, and different methods can be applied to back up your SVC configuration. We suggest implementing an automatic
configuration backup by using the configuration backup command. This is described for the CLI and the GUI in Chapter 7, SVC operations using the CLI on page 337, and Chapter 8, SVC operations using the GUI on page 469.
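As an illustration of automating this backup, the following sketch uses the third-party paramiko SSH library to run the svcconfig backup command against the cluster. The host name, user, key path, and the assumption that the resulting svc.config.backup.xml file is afterwards copied off the configuration node (for example with pscp or scp) are all placeholders to adapt to your environment; this is not an IBM-supplied script.

```python
import paramiko

def backup_svc_config(cluster_ip, username="admin", key_file="/path/to/private_key"):
    """Run 'svcconfig backup' on the SVC cluster over SSH and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(cluster_ip, username=username, key_filename=key_file)
    try:
        stdin, stdout, stderr = client.exec_command("svcconfig backup")
        return stdout.read().decode(), stderr.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    out, err = backup_svc_config("9.43.86.117")    # example cluster IP only
    print(out or err)
    # The backup file (svc.config.backup.xml on the configuration node) should
    # then be copied to external media, for example with pscp or scp.
```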

3.4 Performance considerations


While storage virtualization with the SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe VDisks across multiple disk arrays are the reasons why the performance improvement is significant when it is implemented with midrange disk subsystems, since this technology is often only provided with high-end enterprise disk subsystems.
Note: Technically, almost all storage controllers provide both striping (RAID 1 or RAID 10) and some form of caching. The real advantage is the degree to which you can stripe the data, that is, across all MDisks in a group, and therefore have the maximum number of spindles active at once. The caching is a secondary reason: the SVC provides additional caching above what midrange controllers provide (usually a couple of GB), whereas enterprise systems have much larger caches.
To ensure the desired performance and capacity of your storage infrastructure, we recommend that you do a performance and capacity analysis to reveal the business requirements of your storage environment. When this is done, you can use the guidelines in this chapter to design a solution that meets those business requirements.
When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of a given system. At the same time, you must consider the workload for which you identify that limiting factor, because the limiting component might not be the same for different workloads. When designing a storage infrastructure using the SVC, or implementing the SVC in an existing storage infrastructure, you must therefore take into consideration the performance and capacity of the SAN, the disk subsystems, the SVC, and the known or expected workload.

3.4.1 SAN
The SVC now has different models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switch will bring security and performance together. We recommend implementing a dual HBA approach at the host to access the SVC.

3.4.2 Disk subsystems


From a performance perspective, there are a few guidelines for connecting to an SVC:
- Connect all storage ports to the switch and zone them to all the SVC ports. Zone all ports on the disk back-end storage to all ports on the SVC nodes in a cluster, and make sure to configure the storage subsystem LUN masking settings to map all LUNs to all the SVC WWPNs in the cluster. The SVC is designed to handle large quantities of multiple paths from back-end storage.
- Using as many 15K RPM disks as possible will improve performance considerably.
- Creating one LUN per array will help in a sequential workload environment.
In most cases, the SVC will be able to improve performance, especially on low-to-midrange disk subsystems, on older disk subsystems with slow controllers, or on uncached disk systems. This improvement happens because:
- The SVC has the capability to stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.
- The SVC has a 4 GB or 8 GB cache, or a 24 GB cache in the latest 2145-CF8 model, and it has an advanced caching mechanism.
The SVC's large cache and advanced cache management algorithms also allow it to improve on the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (while still supporting full data integrity) has the potential to be particularly important in achieving good database performance.
Depending on the size, age, and technology level of the disk storage system, the total cache available in the SVC may be larger, smaller, or about the same as that associated with the disk storage. Because hits can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage controller level of cache has the greater capacity, hits to this cache should be expected to occur in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache will tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases. Its ability to do so depends on both the underlying storage technology and the degree to which the workload exhibits hot spots or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, describes the SVC's cache partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

3.4.3 SVC
The SVC cluster is scalable up to eight nodes, and the performance is almost linear when adding more nodes into an SVC cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity of having a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, thereby creating a greater level of concurrent I/O to the back end without overloading a single disk or array.
Assuming that there are no bottlenecks in the SAN or on the disk subsystem, keep in mind that specific guidelines must be followed when you are creating:
- Managed disk groups
- Virtual disks
- Host objects that must receive disk space from an SVC cluster
More detailed information about performance and best practices for the SVC can be found in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open


3.4.4 Performance monitoring


Performance monitoring should be an integral part of the overall IT environment. For the SVC, just as for the other IBM storage subsystems, the official IBM tool to collect performance statistics and supply a performance report is the TotalStorage Productivity Center (TPC-SE). More information about using TPC to monitor your storage subsystem is covered in Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364, found at:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
See also Chapter 8, SVC operations using the GUI on page 469, for more detailed information about collecting performance statistics.


Chapter 4.

SVC initial configuration


In this chapter, we cover:
- Managing the cluster
- SSPC overview
- SVC Hardware Management Console
- SVC initial configuration steps
- SVC ICA application upgrade


4.1 Managing the cluster


There are three different ways to manage the SVC:
- Using the System Storage Productivity Center (SSPC)
- Using an SVC Management Console
- Using a PuTTY based SVC command-line interface
Figure 4-1 shows the three different ways to manage an SVC cluster.

Figure 4-1 SVC cluster management

You have full management of the SVC regardless of the method you choose. The SSPC is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, you may be using the SVC console (HMC). You can still use it together with the SSPC, with the proviso that you can only log in to your SVC from one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC console or the SSPC, because the SVC CLI is based on the PuTTY interface and can be installed anywhere.

4.1.1 TCP/IP requirements for SAN Volume Controller


To plan your installation, consider the TCP/IP address requirements of the SAN Volume Controller cluster and the requirements for the SAN Volume Controller to access other services. You must also plan address allocation and Ethernet router, gateway and firewall configuration to provide the required access and network security. Figure 4-2 shows the TCP/IP ports and services that are used by SAN Volume Controller.


Figure 4-2 TCP/IP Ports

For more information about TCP/IP prerequisites, see Chapter 3, Planning and configuration on page 63, and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To start an SVC initial configuration, a common flowchart that covers all of the management methods is shown in Figure 4-3.


Figure 4-3 SVC initial configuration flowchart

In the next sections we describe each of the steps shown in Figure 4-3.


4.2 Systems Storage Productivity Center overview


The System Storage Productivity Center (SSPC) is an integrated hardware and software solution that provides a single management console for managing the IBM SAN Volume Controller, IBM DS8000, and other components of your data storage infrastructure. The current release of SSPC consists of the following components:
- IBM Tivoli Storage Productivity Center Basic Edition 4.1.1, which is pre-installed on the System Storage Productivity Center server. Tivoli Storage Productivity Center for Replication (TPC-R) is also pre-installed; an additional license is required to use it.
- IBM SAN Volume Controller Console 5.1.0, which is pre-installed on the System Storage Productivity Center server. Because this level of the console no longer requires a CIM agent to communicate with the SAN Volume Controller, a CIM agent is not installed with the console. Instead, you can use the CIM agent that is embedded in the SAN Volume Controller hardware. To manage prior levels of the SAN Volume Controller, install the corresponding CIM agent on the SSPC server. PuTTY remains installed on the System Storage Productivity Center and is available for key generation.
- IBM System Storage DS Storage Manager 10.60, which is available for you to optionally install on the System Storage Productivity Center server or on a remote server. DS Storage Manager 10.60 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.60, when you use Tivoli Storage Productivity Center to add and discover a DS CIM agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity Center.
- IBM Java 1.5, which is preinstalled and supports DS Storage Manager 10.60. You do not need to download Java from Sun Microsystems.
- DS CIM agent management commands (DSCIMCLI) for 5.4.3, which are preinstalled on the System Storage Productivity Center server.
Figure 4-4 shows the product stack in the SSPC Console 1.4.


Figure 4-4 SSPC 1.4 products stack

This replaces the functionality of the SVC Master Console (MC), which was a dedicated management console for the SVC. The Master Console is still supported and will run the latest code levels of the SVC Console software components. The SSPC has all the software components pre-installed and tested on a System x machine, model SSPC 2805-MC4, with Windows installed on it. All the software components installed on the SSPC can also be ordered and installed on hardware that meets or exceeds the minimum requirements. The SVC Console software components are also available on the Web. When using the SSPC with the SAN Volume Controller, you have to install and configure it before configuring the SAN Volume Controller. For a detailed guide to the SSPC, we recommend that you refer to the IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 63.

4.2.1 SSPC hardware


The hardware used by the SSPC solution is the IBM System Storage Productivity Center 2805-MC4, a 1U rack-mounted server. It has the following initial configuration:
- One Intel Xeon quad-core central processing unit, with a speed of 2.4 GHz, a cache of 8 MB, and a power consumption of 80 W
- 8 GB of RAM (eight 1-inch dual inline memory modules of double-data-rate 3 (DDR3) memory, with a data rate of 1333 MHz)
- Two 146 GB hard disk drives, each with a speed of 15K rpm
- One Broadcom 6708 Ethernet card

- One CD/DVD bay with read and write-read capability
- Microsoft Windows 2008 Enterprise Edition
It is designed to perform basic SSPC functions. If you plan to upgrade the SSPC for more functions, you can purchase the Performance Upgrade Kit to add more capacity to your hardware.

4.2.2 SVC installation planning information for SSPC


Consider the following steps when planning the SSPC installation:
- Verify that the hardware and software prerequisites have been met
- Determine the location of the rack where the SSPC is to be installed
- Verify that the SSPC will be installed in line of sight to the SVC nodes
- Verify that you have a keyboard, mouse, and monitor available to use
- Determine the cabling required
- Determine the network IP address
- Determine the SSPC host name
For detailed installation guidance, refer to the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824, at:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
and the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337, at:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the SSPC console based on the 2805-MC4 hardware.

Figure 4-5 SSPC 2805-MC4 front view

Figure 4-6 shows a rear view of SSPC Console based on the 2805-MC4 hardware.


Figure 4-6 SSPC 2805-MC4 rear view

4.3 SVC Hardware Management Console


For an existing SVC environment where an SVC Hardware Management Console (HMC) is already operational, or where it is customer supplied, the SVC HMC requires that you obtain the following software:
- Operating system: one of the following operating systems must be provided on your hardware platform:
  Microsoft Windows Server 2003 Standard Edition
  Microsoft Windows Server 2003 Enterprise Edition

- Microsoft Windows Internet Explorer Version 7.0
- Antivirus software (not required but strongly recommended)
- PuTTY Version 0.60 (if not installed). You can obtain the latest copy of PuTTY by going to the following Web site and downloading the Windows installer in the Binaries section:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
Consideration: If you want to use IPv6, then you must be running Windows 2003 Server.
- SVC ICAT application
For a complete and current list of the supported software levels for the SVC Console, refer to the SVC Support page at:
http://www.ibm.com/storage/support/2145

4.3.1 SVC installation planning information for the HMC


Consider the following steps when planning for the HMC installation:
- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the HMC is to be installed.
- Verify that the HMC will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.

- Determine the cabling required.
- Determine the network IP address.
- Determine the HMC host name.
For detailed installation guidance, refer to the IBM System Storage SAN Volume Controller: Master Console Guide, SC27-2223, at the following link:
http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en

4.4 SVC Cluster Set Up


This section provides step-by-step instructions for building the SVC cluster initially.

4.4.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions needed to create the cluster for the first time using the service panel. Use Figure 4-7 as a reference for the buttons to push in the steps that follow on the SVC 2145-8F2 and 2145-8F4 node models, Figure 4-8 for the SVC 2145-8G4 and 2145-8A4 node models, and Figure 4-9 for the SVC 2145-CF8 node.


Figure 4-7 SVC 8F2 Node and SVC 8F4 Node front and operator panel


Figure 4-8 SVC 8G4 Node front and operator panel

Figure 4-9 shows the CF8 model front panel.


Figure 4-9 CF8 front panel

4.4.2 Prerequisites
Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure that the following information is available:
- License: The license indicates whether the customer is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the customer is licensed to virtualize.
- For IPv4 addressing: cluster IPv4 addresses (one for the cluster and another for the service address), the IPv4 subnet mask, and the gateway IPv4 address.
- For IPv6 addressing: cluster IPv6 addresses (one for the cluster and another for the service address), the IPv6 prefix, and the gateway IPv6 address.
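Before keying the addresses into the service panel, it can help to verify that the cluster IP, the service IP, and the gateway really belong to the same subnet. The following is a minimal sketch using Python's standard ipaddress module; all the addresses shown are examples only and are not specific to any installation.

```python
import ipaddress

def check_cluster_addressing(cluster_ip, service_ip, gateway, subnet_mask):
    """Verify that cluster, service, and gateway addresses share one IPv4 subnet."""
    base = ipaddress.ip_network(f"0.0.0.0/{subnet_mask}", strict=False)
    prefix = base.prefixlen
    cluster_net = ipaddress.ip_network(f"{cluster_ip}/{prefix}", strict=False)
    for name, addr in (("service", service_ip), ("gateway", gateway)):
        if ipaddress.ip_address(addr) not in cluster_net:
            raise ValueError(f"{name} address {addr} is not in {cluster_net}")
    return cluster_net

if __name__ == "__main__":
    net = check_cluster_addressing("9.43.86.117", "9.43.86.118",
                                   "9.43.86.1", "255.255.254.0")
    print(f"All addresses are in {net}")
```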

4.4.3 Initial configuration using the service panel


After the hardware is physically installed into racks, complete the following steps to initially configure the cluster through the service panel:


1. Choose any node that is to become a member of the cluster being created. 2. At the service panel of that node, click and release the Up or Down navigation button continuously until Node: is displayed. Important: If a timeout occurs when entering the input for the fields during these steps, you must begin again from step 2. All the changes are lost, so be sure to have all the information on hand before beginning. 3. Click and release the Left or Right navigation button continuously until Create Cluster? is displayed. Click the Select button. 4. If IPv4 Address: is displayed on line 1 of the service display, go to step 5. If Delete Cluster? is displayed in line 1 of the service display, this node is already a member of a cluster. Either the wrong node was selected, or this node was already used in a previous cluster. The ID of this existing cluster is displayed in line 2 of the service display. a. If the wrong node was selected, this procedure can be exited by clicking the Left, Right, Up, or Down button (it cancels automatically after 60 seconds). b. If it is certain that the existing cluster is not required, follow these steps: i. Click and hold the Up button. ii. Click and release the Select button. Then release the Up button. This deletes the cluster information from the node. Go back to step 1 and start again. Important: When a cluster is deleted, all client data contained in that cluster is lost. 5. If you are creating the cluster with IPv4, then click the Select button, otherwise for IPv6 press the down arrow to display IPv6 Address: and click the Select button. 6. Use the Up or Down navigation button to change the value of the first field of the IP address to the value that has been chosen. Note: For IPv4, pressing and holding the Up or Down buttons will increment or decrease the IP address field by units of 10. The field value rotates from 0 to 255 with the Down button, and from 255 to 0 with the Up button. For IPv6, you do the same except that it is a 4 digit hexadecimal field and the individual characters will increment. 7. Use the Right navigation button to move to the next field. Use the Up or Down navigation buttons to change the value of this field. 8. Repeat step 7 for each of the remaining fields of the IP address. 9. When the last field of the IP address has been changed, click the Select button. 10.Click the Right button. a. For IPv4, IPv4 Subnet: is displayed. b. For IPv6, IPv6 Prefix: is displayed. 11.Click the Select button. 12.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were changed. There is only a single field for IPv6 Prefix. 13.When the last field of IPv4 Subnet/IPv6 Mask has been changed, click the Select button.


14.Click the Right navigation button. a. For IPv4, IPv4 Gateway: is displayed. b. For IPv6, IPv6 Gateway: is displayed. 15.Click the Select button. 16.Change the fields for the appropriate Gateway in the same way that the IPv4/IPv6 address fields were changed. 17.When changes to all Gateway fields have been made, click the Select button. 18.Click the Right navigation button. a. For IPv4, IPv4 Create Now? is displayed. b. For IPv6, IPv6 Create Now? is displayed. 19.When the settings have all been verified as accurate, click the Select navigation button. To review the settings before creating the cluster, use the Right and Left buttons. Make any necessary changes, return to Create Now?, and click the Select button. If the cluster is created successfully, Password: is displayed in line 1 of the service display panel. Line 2 contains a randomly generated password, which is used to complete the cluster configuration in the next section. Important: Make a note of this password now. It is case sensitive. The password is displayed only for approximately 60 seconds. If the password is not recorded, the cluster configuration procedure must be started again from the beginning. 20.When Cluster: is displayed in line 1 of the service display and the Password: display timed out, then the cluster was created successfully. Also, the cluster IP address is displayed on line 2 when the initial creation of the cluster is completed. If the cluster is not created, Create Failed: is displayed in line 1 of the service display. Line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to find the reason why the cluster creation failed and what corrective action to take. Important: At this time, do not repeat this procedure to add other nodes to the cluster. Adding nodes to the cluster is accomplished in 7.8.2, Adding a node on page 388 and in 8.10.3, Adding nodes to the cluster on page 559.

4.5 Adding the cluster to the SSPC or the SVC HMC


After you have performed the activities in 4.4, SVC Cluster Set Up on page 109, you need to complete the cluster setup using the SAN Volume Controller Console. Follow 4.5.1, Configuring the GUI on page 114 to create the cluster and complete the configuration. Note: Make sure that the SVC Cluster IP address (svcclusterip) can be reached successfully with a ping command from the SVC console.

4.5.1 Configuring the GUI


If this is the first time that the SAN Volume Controller administration GUI is being used, you must configure it as explained here:


1. Open the GUI using one of the following methods:
- Double-click the icon marked SAN Volume Controller Console on the SVC Console's desktop.
- Open a Web browser on the SVC Console and point to this address: http://localhost:9080/ica (We accessed the SVC Console using this method.)
- Open a Web browser on a separate workstation and point to this address: http://svcconsoleipaddress:9080/ica
Figure 4-10 shows the SVC 5.1 Welcome screen.

Figure 4-10 Welcome screen

2. Click the Add SAN Volume Controller Cluster button and you will be presented with the screen shown in Figure 4-11.

Figure 4-11 Adding the SVC cluster ip address


Important: Do not forget to mark the field with Create Initialize Cluster. Without this flag you will not be able to Initialize the cluster and you will get the error message CMMVC5753E. Figure 4-12 shows the CMMVC5753E error.

Figure 4-12 CMMVC5753E error

3. Click OK and a pop-up window appears and prompts for the user ID and password of the SVC cluster, as shown in Figure 4-13. Enter the user ID admin and the cluster admin password that was set earlier in 4.4.1, Creating the cluster (first time) using the service panel on page 109 and click OK.

Figure 4-13 SVC cluster user ID and password sign-on window

4. The browser accesses the SVC and displays the Create New Cluster wizard window, as shown in Figure 4-14. Click Continue.


Figure 4-14 Create New Cluster wizard

5. At the Create New Cluster page (Figure 4-15), fill in the following details:
- A new superuser password to replace the random one that the cluster generated. The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number, and it has a minimum of one character and a maximum of 15 characters.
Note: The admin user used previously is no longer needed. It is replaced by the superuser user created at cluster initialization time, because starting with SVC 5.1 the CIM agent has been moved inside the SVC cluster.
- A service password to access the cluster for service operations. The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number, and it has a minimum of one character and a maximum of 15 characters.
- A cluster name. The cluster name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number, and it has a minimum of one character and a maximum of 15 characters.
- A service IP address to access the cluster for service operations. Choose between an automatically assigned IP address from DHCP or a static IP address.
Note: The service IP address is different from the cluster IP address. However, because the service IP address is configured for the cluster, it must be on the same IP subnet.
- The fabric speed of the Fibre Channel network.
- The Administrator Password Policy check box, which, if selected, enables a user to reset the password from the service panel (this is helpful, for example, if the password is forgotten). This check box is optional.
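The password and name rules listed above (characters A to Z, a to z, 0 to 9, and the underscore; not starting with a number; 1 to 15 characters) can be checked up front. The following is a small sketch using a regular expression; it is only a convenience check and not part of the SVC console itself.

```python
import re

# 1-15 characters, letters, digits and underscore, not starting with a digit.
VALID_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,14}$")

def is_valid_svc_name(value):
    """Return True if value satisfies the cluster name/password rules above."""
    return bool(VALID_NAME.match(value))

if __name__ == "__main__":
    print(is_valid_svc_name("ITSO_CLS3"))             # True
    print(is_valid_svc_name("1badname"))              # False - starts with a digit
    print(is_valid_svc_name("averyverylongname123"))  # False - longer than 15 characters
```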


Note: The SVC should be in a secure room if this function is enabled, because anyone who knows the correct key sequence can reset the admin password. The key sequence is as follows: a. From the Cluster: menu item displayed on the service panel, click the Left or Right button until Recover Cluster? is displayed. b. Click the Select button. Service Access? should be displayed. c. Click and hold the Up button and then click and release the Select button. This generates a new random password. Write it down. Important: Be careful, because clicking and holding the Down button, and clicking and releasing the Select button, places the node in service mode. 6. Once you have filled in the details, click the Create New Cluster button ( Figure 4-15).


Figure 4-15 Cluster details

Important: Make sure you confirm and retain in a safe place the Administrator and Service password for future use. 7. A Creating New Cluster window will appear, as shown in Figure 4-16. Click Continue each time when prompted.


Figure 4-16 Creating new cluster

8. A Created New Cluster window will appear as shown in Figure 4-17, click Continue.

Figure 4-17 Created New Cluster

9. A Password Changed window will confirm that the password has been modified as shown in Figure 4-18, click Continue.

Figure 4-18 Password Changed

Note: By this time, the service panel display on the front of the configured node should display the cluster name entered previously (for example, ITSO-CLS3). 10.Then you will be redirected to the License setting screen as show in Figure 4-19. choose the type of license as appropriate to your purchase and click GO to continue.


Figure 4-19 License Settings

11.Next, the Featurization Settings window is displayed, as shown in Figure 4-20. To continue, at a minimum the Virtualization Limit (Gigabytes) field must be filled out. If you are licensed for FlashCopy and Metro Mirror (the window reflects Remote Copy in this example), the Enabled radio buttons can also be selected here. Click the Set License Settings button.

Figure 4-20 Capacity License Settings

12.A confirmation window shows what the featurization settings have been set to, as shown in Figure 4-21. Click Continue.


Figure 4-21 Capacity Licensing Settings confirmation

13.A window confirming that you have successfully created the initial settings for the cluster will appear as shown in Figure 4-22.

Figure 4-22 Cluster successfully created


14.Closing the previous task screen by clicking the X in the upper right corner redirects you to the Viewing Clusters screen (the cluster appears as unauthenticated). After you select your cluster and click Go, you are asked to authenticate your access by entering the previously defined superuser user ID and password. Figure 4-23 shows the Viewing Clusters screen.

Figure 4-23 Viewing Clusters screen

15.To complete the SVC cluster configuration, you have to perform the following steps:
a. Add an additional node to the cluster
b. Configure Secure Shell (SSH) keys for command-line users, as shown in section 4.6, Secure Shell overview and CIM Agent on page 123
c. Configure user authentication and authorization
d. Set up Call Home options
e. Set up event notifications and inventory reporting
f. Create managed disk groups (MDGs)
g. Add MDisks to MDGs
h. Identify and create VDisks
i. Create host objects and map VDisks to them
j. Identify and configure FlashCopy mappings and Metro Mirror relationships
k. Back up the configuration data
All these steps are described in Chapter 7, SVC operations using the CLI on page 337, and Chapter 8, SVC operations using the GUI on page 469.

4.6 Secure Shell overview and CIM Agent


Prior to SVC version 5.1, Secure Shell (SSH) was used to secure data flow between the SVC cluster configuration node (SSH server) and a client, either a command-line client through the command-line interface (CLI) or the CIMOM. The connection is secured by means of a private key and public key pair:
- A public key and a private key are generated together as a pair.
- The public key is uploaded to the SSH server.
- The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
- The SSH server must also identify itself with a specific host key. If the client does not have that host key yet, it is added to a list of known hosts.

Secure Shell is the communication vehicle between the management system (usually the SSPC) and the SVC cluster. SSH is a client-server network application. The SVC cluster acts as the SSH server in this relationship. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication. For more information about SSH, go to:
http://en.wikipedia.org/wiki/Secure_Shell
The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.

Figure 4-24 Communication interfaces

SSH keys are generated by the SSH client software. This includes a public key, which is uploaded to and maintained by the cluster, and a private key that is kept private to the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted. To use the CLI, or the SVC graphical user interface (GUI) prior to SVC 5.1, an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC cluster or clusters. The SSPC and the HMC have the freeware implementation of SSH-2 for Windows, called PuTTY, pre-installed. This software provides the SSH client function for users logged into the SVC Console who want to invoke the CLI or GUI to manage the SVC cluster.

Starting with SVC 5.1, the management design has changed and the CIM agent has been moved into the SVC cluster. With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is only required for the SVC command-line interface. Figure 4-25 shows the SVC management design.

Figure 4-25 SVC management design

4.6.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system.
Note: These keys will be used in the steps documented in 4.6.3, Configuring the PuTTY session for the CLI on page 129.
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:
a. Select the SSH2 RSA radio button.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.


Figure 4-26 PuTTY key generator GUI

3. Move the cursor over the blank area to generate the keys.
Note: The blank area indicated by the message is the large blank rectangle inside the section of the GUI labelled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This generates random characters to create a unique key pair.
4. After the keys are generated, save them for later use as follows:
a. Click Save public key, as shown in Figure 4-27.


Figure 4-27 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of them is kept, because the name and location of this SSH public key must be specified in the steps documented in 4.6.2, Uploading the SSH public key to the SVC cluster on page 128.
Note: The PuTTY Key Generator saves the public key with no extension by default. We recommend that you use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key.
c. In the PuTTY Key Generator window, click Save private key.
d. You are prompted with a warning message, as shown in Figure 4-28. Click Yes to save the private key without a passphrase.

Figure 4-28 Saving the private key without passphrase

e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save. If you choose another name or location, ensure that you keep a record of it, because the name and location of the SSH private key must be specified when the PuTTY session is configured in the steps documented in 4.6.3, Configuring the PuTTY session for the CLI on page 129.

We suggest using the default name icat.ppk, because in SVC clusters running versions prior to SVC 5.1 this key is used for icat application authentication and must have this default name.
Note: The PuTTY Key Generator saves the private key with the PPK extension.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program Files\IBM\svcconsole\cimom directory.
Important: If the private key was named something other than icat.ppk, make sure that you rename it to icat.ppk in the C:\Program Files\IBM\svcconsole\cimom folder. The GUI (which will be used later) expects the file to be called icat.ppk and to be in this location. This key is no longer used in SVC 5.1, but it is still valid for previous versions.

4.6.2 Uploading the SSH public key to the SVC cluster


After you have created your SSH key pair, you need to upload your SSH public key to the SVC cluster as follows:
1. From your browser, open:
http://svcconsoleipaddress:9080/ica
Select Users, and then on the next screen select Create a User from the drop-down menu, as shown in Figure 4-29, and click Go.

Figure 4-29 Create user

2. From the Create a User screen, enter the user ID you want to create and the password. At the bottom of the screen, select the Access Level you want to assign to your user (bear in mind that Security Administrator is the maximum level) and browse to the location of the SSH public key file you have created for this user, as shown in Figure 4-30. Click OK.


Figure 4-30 Create user and password

3. You have completed the user creation process and uploaded the user's SSH public key, which will be paired later with the user's private .ppk key, as described in 4.6.3, Configuring the PuTTY session for the CLI on page 129. Figure 4-31 shows the successful upload of the SSH admin key.

Figure 4-31 Adding the SSH admin key successfully

4. The basic setup requirements for the SVC cluster using the SVC cluster Web interface have now been completed.

4.6.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, the PuTTY session must be configured using the SSH keys generated earlier in 4.6.1, Generating public and private SSH key pairs using PuTTY on page 125. Perform these steps to configure the PuTTY session on the SSH client system:


1. From the SSPC Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left, click Session, if it is not already selected.
Note: The items selected in the Category pane affect the content that appears in the right pane.

Figure 4-32 PuTTY Configuration window

3. In the right pane, under the Specify the destination you want to connect to section, select the SSH radio button. Under the Close window on exit section, select the Only on clean exit radio button. This ensures that if there are any connection errors, they will be displayed on the user's screen.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-33.


Figure 4-33 PuTTY SSH Connection Configuration window

5. In the right pane, in the Preferred SSH protocol version section, select radio button 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.
7. In the right pane, in the Private key file for authentication field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK). See Figure 4-34.


Figure 4-34 PuTTY Configuration: Private key location

8. From the Category pane on the left side of the PuTTY Configuration window, click Session.
9. In the right pane, follow these steps, as shown in Figure 4-35:
a. Under the Load, save, or delete a stored session section, select Default Settings and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session.
d. Click Save.


Figure 4-35 PuTTY Configuration: Saving a session

The PuTTY Configuration window can now either be closed or left open to continue.
Tip: Normally, output that comes from the SVC is wider than the default PuTTY window size. We recommend that you change your PuTTY window appearance to use a font with a character size of 8. To do this, click the Appearance item in the Category tree, as shown in Figure 4-35, and then click Font. Choose a font with a character size of 8.

4.6.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the session as detailed here:
1. From the SVC Console desktop, open the PuTTY application by selecting Start → Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-36), select the session saved earlier (in our example, ITSO-SVC1) and click Load.
3. Click Open.


Figure 4-36 Open PuTTY command-line session

4. If this is the first time the PuTTY application is being used since generating and uploading the SSH key pair, a PuTTY Security Alert window pops up warning that the server's host key is not yet cached, as shown in Figure 4-37. Click Yes to accept the host key and continue, which invokes the CLI.

Figure 4-37 PuTTY Security Alert

5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key uploaded to the SVC cluster.
Example 4-1 Authenticating
login as: admin
Authenticating with public key "rsa-key-20080617"
Last login: Wed Aug 18 03:30:21 2009 from 10.64.210.240
IBM_2145:ITSO-CL1:admin>


You have now completed the tasks required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session. Continue with the next section if you also need to configure SSH for an AIX client.
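If you want to run CLI commands non-interactively (for example, from a batch script on the SSPC), the plink utility that is installed with PuTTY can reuse the same private key. The following is a minimal sketch only; the key path is taken from our example and the cluster address is a placeholder that will differ in your environment:

plink -ssh -i "C:\Support Utils\PuTTY\icat.ppk" admin@<cluster_ip_address> "svcinfo lscluster -delim :"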

4.6.5 Configuring SSH for AIX clients


To configure SSH for AIX clients, follow these steps:
1. The SVC cluster IP address must be reachable using the ping command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. Installation images can be found at:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using SSH.
3. Generate an SSH key pair (a sketch of the resulting commands follows this list):
a. Run cd to go to the /.ssh directory.
b. Run the command ssh-keygen -t rsa.
c. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key (//.ssh/id_rsa)
d. Pressing Enter will use the default shown in parentheses; otherwise, enter a file name (for example, aixkey) and press Enter.
e. The following prompt is displayed:
Enter a passphrase (empty for no passphrase)
We recommend entering a passphrase when the CLI will be used interactively, because there is no other authentication when connecting through the CLI. After typing in the passphrase, press Enter.
f. The following prompt is displayed:
Enter same passphrase again:
Type in the passphrase again and then press Enter.
g. A message is displayed indicating that the key pair has been created. The private key file will have the name entered above (for example, aixkey). The public key file will have the name entered above with an extension of .pub (for example, aixkey.pub).
Note: If you are generating an SSH key pair so that you can use the CLI interactively, we recommend that you use a passphrase so that you have to authenticate every time you connect to the cluster. It is possible to have a passphrase-protected key for scripted usage, but you will have to use something like the expect command to have the passphrase passed to the ssh command.
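The following is a minimal sketch of that flow, run from the AIX client. The key name aixkey, the user admin, and <cluster_ip_address> are illustrative placeholders; the public key (aixkey.pub) must still be uploaded to the cluster and associated with the user, as described in 4.6.2, before the connection will succeed.

# generate an RSA key pair (you are prompted for a passphrase)
cd /.ssh
ssh-keygen -t rsa -f aixkey
# after aixkey.pub has been uploaded to the SVC cluster, open a CLI session
ssh -i /.ssh/aixkey admin@<cluster_ip_address>
# or run a single command and exit
ssh -i /.ssh/aixkey admin@<cluster_ip_address> svcinfo lscluster -delim :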

4.7 Using IPv6


SVC V4.3 introduced IPv6 functionality to the console and clusters. You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is non-disruptive, except that you need to remove and re-define the cluster to the SVC Console.

Note: To remotely access the SVC Console and clusters running IPv6, you are required to run Internet Explorer 7 and have IPv6 configured on your local workstation.

4.7.1 Migrating a cluster from IPv4 to IPv6


As a prerequisite, you should have IPv6 already enabled and configured on the SSPC/Windows server running SVC Console. We have configured an interface with IPv4 and IPv6 addresses on the SSPC, as shown in Example 4-2.
Example 4-2 Output of ipconfig on SSPC
C:\Documents and Settings\Administrator>ipconfig

Windows IP Configuration

Ethernet adapter IPv6:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 10.0.1.115
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   IP Address. . . . . . . . . . . . : 2001:610::115
   IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
   Default Gateway . . . . . . . . . :

1. Select Manage Cluster → Modify IP Addresses, as shown in Figure 4-38.

Figure 4-38 Modify IP Addresses window

2. In the IPv6 section shown in Figure 4-38:
a. Select an IPv6 interface and click Modify. Then, in the screen shown in Figure 4-39:


b. Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
c. Type an IPv6 address in the Cluster IP field.
d. Type an IPv6 address in the Service IP address field.
e. Type an IPv6 gateway in the Gateway field.
f. Click the Modify Settings button.

Figure 4-39 Modify IP Addresses - Adding IPv6 addresses

3. A confirmation window displays (Figure 4-40). You can click the X in the top right corner to close this tab.

Figure 4-40 Modify IP Addresses window

4. Before you remove the cluster from the SVC Console, you should test IPv6 connectivity using the ping command from a cmd.exe session on the SSPC (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to SVC Cluster
C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms

Ping statistics for 2001:610::119:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

5. In the Viewing Clusters pane, in the GUI Welcome window, select the radio button next to the cluster you want to remove. Select Remove a Cluster from the drop-down menu and click Go.
6. The Viewing Clusters window will re-appear, without the cluster you have removed. Select Add a Cluster from the drop-down menu and click OK (Figure 4-41).

Figure 4-41 Adding a cluster

7. The Adding a Cluster screen will appear. Enter your IPv6 address, as shown in Figure 4-42, and click OK.

Figure 4-42 IPv6 address

8. You will be asked to enter your CIM user ID (superuser) and your password (the default is passw0rd), as shown in Figure 4-43.


Figure 4-43 Maintaining SSH Keys window - Skip SSH Keys

9. The Viewing Clusters window will re-appear with the cluster displaying an IPv6 address, as shown in Figure 4-44. Launch the SAN Volume Controller Console for the cluster and go back into Modify IP Address, as you did in step 1.

Figure 4-44 Viewing Clusters window - Displaying new cluster using IPv6 address

10.In the Modify IP Addresses window, select the IPv4 address port and select Clear Port Settings, as shown in Figure 4-45.


Figure 4-45 Clear Port Settings

11.A confirmation window will display, as shown in Figure 4-46. Click OK.

Figure 4-46 Confirmation of IP Address change window

12.A second window (Figure 4-47) will display, confirming the IPv4 stack has been disabled and the associated addresses have been removed. Click Return.

Figure 4-47 IPv4 stack has been removed

4.7.2 Migrating a cluster from IPv6 to IPv4


The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in 4.7.1, Migrating a cluster from IPv4 to IPv6 on page 136, except you add IPv4 addresses and remove the IPv6 addresses.


4.8 Upgrading the SVC Console software


This section takes you through the steps to upgrade your existing SVC Console GUI. You can also use these steps to install a new SVC Console on another server. Proceed as follows:
1. Download the latest available version of the ICA application and check for compatibility with your running version from the following Web site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002888
2. Save your account definitions, taking notes of the defined users, passwords, and SSH keys, because these may need to be reused if any problems are encountered during the GUI upgrade process. Example 4-4 shows how to list the defined accounts using the CLI.
Example 4-4 Accounts list
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser
id name      password ssh_key remote usergrp_id usergrp_name
0  superuser yes      no      no     0          SecurityAdmin
1  admin     yes      yes     no     0          SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 0
id 0
name superuser
password yes
ssh_key no
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 1
id 1
name admin
password yes
ssh_key yes
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>

3. Execute the setup.exe file from the location where you have saved and unzipped the latest SVC Console file. Figure 4-48 shows the location of the setup.exe on our system.


Figure 4-48 setup.exe file

4. The installation wizard will start. This first window (as shown in Figure 4-49) asks you to:
- Shut down any running Windows programs
- Stop all SVC services
- Review the README file
Figure 4-49 shows how to stop SVC services.


Figure 4-49 Stop CIMOM service

5. Figure 4-50 shows the wizard Welcome screen.

Figure 4-50 Wizard welcome screen

Once you are ready, click Next.
6. The installation will ask you to accept the license agreement, as shown in Figure 4-51.


Figure 4-51 License agreement screen

7. The installation should detect your existing SVC Console installation (if you are upgrading). If it does, it will ask you to:
- Select Preserve Configuration if you want to keep your old configuration. (You should make sure that this is checked.)
- Manually shut down the SVC Console services, namely:
  - IBM System Storage SAN Volume Controller Pegasus Server
  - Service Location Protocol
  - IBM WebSphere Application Server V6 - SVC

There may be differences in the existing services, depending on the version you are upgrading from. Follow the instructions on the dialog wizard for which services to shut down, as shown in Figure 4-52.


Figure 4-52 Product Installation Check

Important: If you want to keep your SVC configuration, make sure that you check the Preserve Configuration check box. If you omit this, you will lose your entire SVC Console setup, and you will have to reconfigure your console as though it were a fresh installation.
8. The installation wizard will then check that the appropriate services are shut down, remove the previous version, and take you to the Installation Confirmation window shown in Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the possible problems, giving you time to fix them before proceeding.


Figure 4-53 Installation Confirmation

9. The progress of the installation is shown in Figure 4-54. For our environment, it took approximately 10 minutes to complete.

Figure 4-54 Installation Progress

10.The installation process will now start the migration of cluster user accounts. Starting with SVC 5.1, the CIMOM has been moved into the cluster and is no longer present in the SVC Console or SSPC. The CIMOM authentication login process will be performed in the ICA application when we launch the SVC management application.


As part of the migration input, Figure 4-55 shows where to enter the admin password for each of the clusters you originally owned. This password was set during the initial creation of the SVC cluster and should be kept in a safe place.

Figure 4-55 Migration input

11.At the end of the user account migration process, you may get the error shown in Figure 4-56.


Figure 4-56 SVC cluster user account migration error

This is normal behaviour, because in our environment we have implemented only the superuser user ID. The GUI upgrade wizard is intended to migrate ordinary user accounts; it is not intended to be used for migrating the superuser account. If you do get this error, when you try to access your SVC cluster using the GUI, it will require you to enter the default CIMOM user ID (superuser) and password (passw0rd). That is because the superuser account has not been migrated, and you will have to use the default in the meantime.
12.After you click Next, the wizard will either restart all the appropriate SVC Console processes, or inform you that you need to reboot, and then give you a summary of the installation. In this case, we were told that we need a reboot, as shown in Figure 4-57.


Figure 4-57 Installation summary

13.The wizard requires us to restart our computer (Figure 4-58).

Figure 4-58 Installation finished: requesting reboot

14.Finally, to see what the new interface looks like, you can launch the SVC Console by using the icon on the desktop. Log in and confirm that the upgrade was successful by noting the Console Version number on the right hand side of the window (under the graphic). See Figure 4-59.

Figure 4-59 Launching the upgraded SVC Console

You have now completed the upgrade of your SVC Console. To access the SVC, click Clusters in the left pane; you will be redirected to the Viewing Clusters screen, as shown in Figure 4-60.

Figure 4-60 Viewing Clusters


As you can see, the cluster is Unauthenticated. This is to be expected. Select the cluster, click Go, and launch the SAN Volume Controller Application; you will be required to enter your CIMOM user ID (superuser) and your password, as shown in Figure 4-61.

Figure 4-61 Sign on to Cluster

Finally, you can manage your SVC cluster as shown in Figure 4-62.

Figure 4-62 Cluster management screen


Chapter 5. Host configuration
In this chapter, we describe the basic host configuration procedures required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).


5.1 SVC setup


Traditionally in IBM SAN Volume Controller environments, hosts are connected to an SVC via a SAN. In practical implementations with high availability requirements (the majority of the target customers for SVC), the SAN is implemented as two separate fabrics, providing a fault-tolerant arrangement of two or more counterpart SANs. For the hosts, each of the SANs provides alternate paths to the resources (VDisks) provided by the SVC. Starting with SVC 5.1, iSCSI is introduced as an alternative protocol for attaching hosts to the SAN Volume Controller via a LAN. However, within the SVC, all communication with back-end storage subsystems, and with other SVC clusters, is still done via Fibre Channel. For iSCSI/LAN-based access to the SVC, using a single network or two physically separated networks is supported. The iSCSI feature is a software feature provided by the SVC 5.1 code. It is therefore available not only on the new CF8 nodes but also on any of the legacy nodes that support the SVC 5.1 release. The existing SVC node hardware has multiple 1 Gbps Ethernet ports. Until now only one has been used, and that is for cluster configuration. With the introduction of iSCSI, both ports may now be used. Redundant paths to VDisks can be provided for the SAN as well as for the iSCSI environment. Figure 5-1 shows the different attachments supported with the SVC 5.1 release.

Figure 5-1 SVC Host Attachment Overview

5.1.1 FC/SAN setup overview


Hosts using Fibre Channel as the connection to an SVC are always connected to a SAN switch. For SVC configurations the use of two redundant SAN fabrics is strongly recommended. This implies that each server is equipped with a minimum of two HBAs, with each of the HBAs connected to a SAN switch in one of the two fabrics (assuming there is one port per HBA).

SVC imposes no special limit on the Fibre Channel optical distance between SVC nodes and host servers. A server may therefore be attached to an edge switch in a core-edge configuration while the SVC cluster is at the core. SVC supports up to three ISL hops in the fabric, which means that the server and the SVC node may be separated by up to five actual FC links, four of which can be 10 km long if long-wave SFPs are used. For high performance servers the basic rule is to avoid ISL hops, that is, connect them to the same switch as the SVC, if possible.

The following two limits have to be kept in mind when connecting host servers to an SVC:
- Up to 256 hosts per I/O Group are supported, which results in a total of 1024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
- 512 distinct configured host WWPNs are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (for each iSCSI name an internal WWPN is generated) associated with all of the hosts that are associated with a single I/O Group.

The access from a server to an SVC cluster via the SAN fabrics is defined by the use of zoning. The basic rules for host zoning with the SVC are:
- For configurations of fewer than 64 hosts per cluster, the SAN Volume Controller supports a simple set of zoning rules that enable a small set of host zones to be created for different environments. Switch zones containing host HBAs must contain no more than 40 initiators in total, including the SVC ports, which act as initiators. Thus a valid zone would be 32 host ports plus 8 SVC ports. This restriction exists because the order N² scaling of the number of registered state change notification (RSCN) messages with the number of initiators per zone (N) can cause problems. It is nevertheless recommended to zone using single HBA port zoning, as described in the next rule.
- For configurations of more than 64 hosts per cluster, the SAN Volume Controller supports a more restrictive set of host zoning rules. Each HBA port must be placed into a separate zone. Also included in this zone should be exactly one port from each SVC node in the I/O Group(s) associated with this host. For smaller configurations it is recommended, but not mandatory, that hosts be zoned this way.
- Switch zones containing host HBAs must contain host HBAs from similar hosts or similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, as must QLogic and Emulex adapters.
- To obtain the best performance from a host with multiple FC ports, the zoning should ensure that each FC port of a host is zoned with a different group of SVC ports.
- To obtain the best overall performance of the subsystem and to prevent overloading, the workload on each SVC port should be equal. This will typically involve zoning approximately the same number of host FC ports to each SVC FC port.
- For any given VDisk, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group, that is, four paths to each VDisk provided by this I/O Group, are sufficient.

Figure 5-2 shows an overview for a basic setup with servers having two single-port HBAs each. The simplest way to connect them is:
- Distribute the hosts equally between two logical sets per I/O Group.
- Always connect hosts from the same set to the same group of SVC ports. Such a port group includes exactly one port from each SVC node in the I/O Group. The correct connections are defined by zoning. The port groups are defined as follows:
  - Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes (for example, N1/N2 of I/O Group zero).
  - Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of the I/O Group.

For these port groups, aliases (per I/O Group) can be created:
Fabric A: IOGRP0_PG1 --> N1_P1;N2_P1, IOGRP0_PG2 --> N1_P3;N2_P3
Fabric B: IOGRP0_PG1 --> N1_P4;N2_P4, IOGRP0_PG2 --> N1_P2;N2_P2
Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set, and the host port WWPN plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone. Using this schema provides four paths per host to one I/O Group and helps to maintain an equal distribution of host connections across the SVC ports. Figure 5-2 shows an overview of this host zoning schema.

Figure 5-2 Overview of four path host zoning
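For illustration only, the aliases and a single-HBA host zone described above could be defined on a Brocade-based fabric with commands similar to the following sketch. The alias names follow the schema above, while the WWPNs, the zone name, and the configuration name are hypothetical placeholders; equivalent steps can be performed with any switch vendor's management tools.

alicreate "IOGRP0_PG1", "<N1_P1_WWPN>; <N2_P1_WWPN>"
alicreate "HOST1_HBA1", "<host_HBA_WWPN>"
zonecreate "HOST1_IOGRP0_ZONE", "HOST1_HBA1; IOGRP0_PG1"
cfgadd "FABRIC_A_CFG", "HOST1_IOGRP0_ZONE"
cfgenable "FABRIC_A_CFG"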

We recommend, whenever possible, using the minimum number of paths necessary to achieve sufficient redundancy in the SAN environment. For SVC environments this means no more than four paths per I/O Group or VDisk. Keep in mind that all paths have to be managed by the multipath driver on the host side. If we assume a server connected via four ports to the SVC, each VDisk is seen via eight paths. With 125 VDisks mapped to this server, the multipath driver has to support the handling of up to 1000 active paths (8 * 125). Details and current limitations for IBM's Subsystem Device Driver (SDD) can be found in Storage Multipath Subsystem Device Driver User's Guide, GC52-1309-01, which can be downloaded from:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1


For hosts using four HBAs/ports with eight connections to an I/O Group, the zoning schema shown in Figure 5-3 can be used. This schema can be combined with the four-path zoning schema described above.

Figure 5-3 Overview eight Path Host Zoning

5.1.2 Port mask


Starting with SVC V4.1, the concept of a port mask was added. With prior releases, any particular host could see the same set of SCSI LUNs from each of the four FC ports in each node in a particular I/O Group. The port mask is associated with a host object. The port mask controls which SVC (target) ports any particular host can access. The port mask applies to logins from any of the host (initiator) ports associated with the host object in the configuration model. The port mask consists of four binary bits, represented in the CLI as 0 or 1. The rightmost bit is associated with Fibre Channel port 1 on each node. The leftmost bit is associated with port 4. A 1 in any particular bit position allows access to that port, and a 0 denies access. The default port mask is 1111, preserving the behavior of the product prior to the introduction of this feature. For each individual login between a host HBA port and an SVC node port, the SVC decides whether to allow or deny access by examining the port mask associated with the host object to which the host HBA belongs. If access is denied, the SVC responds to SCSI commands as if the HBA port is not known to the SVC.
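As a minimal sketch of how a port mask might be applied from the CLI (the host name Kanaga is reused from the AIX examples later in this chapter; apply a mask only if the host really should be restricted):

svctask chhost -mask 0011 Kanaga

With the mask 0011, logins to SVC node ports 1 and 2 are allowed, while logins to ports 3 and 4 are denied.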


5.2 iSCSI overview


iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets, and thereby leverages an existing IP network, instead of requiring Fibre Channel HBAs and SAN fabric infrastructure.

5.2.1 Initiators and targets


An iSCSI client, known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. A single iSCSI initiator or iSCSI target is referred to as an iSCSI node. An iSCSI target refers to a storage resource located on an iSCSI server, or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a "target".

5.2.2 Nodes
There are one or more iSCSI nodes within a Network Entity. The iSCSI node is accessible via one or more Network Portals. A Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI node. An iSCSI node is identified by its unique iSCSI name, referred to as an IQN. Keep in mind that this name serves only for the identification of the node; it is not the node's address, and in iSCSI the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses, or, as it is implemented in the SVC, the same iSCSI node to use multiple addresses.

5.2.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which by default is in the form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SVC is defined by specifying its iSCSI initiator name (or names). An example of an IQN of a Windows Server could be:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, the host's initiator IQNs have to be specified. Details of how a host is created can be found in Chapter 7, SVC operations using the CLI on page 337, and Chapter 8, SVC operations using the GUI on page 469. An alias string may also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name. An overview of how iSCSI is implemented in the SVC is shown in Figure 5-4.
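As a brief sketch of how such a host definition could look on the CLI (the host name and IQN are taken from the example above; verify the exact parameters in Chapter 7 for your code level):

svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
svcinfo lshost itsoserver01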


Figure 5-4 SVC iSCSI Overview

A host that is using iSCSI as the communication protocol to access its VDisks on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node. For iSCSI, both ports can be used. Note that Ethernet link aggregation (port trunking) or channel bonding for the SVC nodes' Ethernet ports is not supported for the 1 Gbps ports in this release. Support for jumbo frames, that is, MTU sizes greater than 1500 bytes, is planned for future SVC releases. For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and/or two IPv6 addresses, or iSCSI network portals, can be defined: one IPv4 and/or one IPv6 address per Ethernet port, as shown in Figure 2-12 on page 29.

5.3 VDisk discovery


Hosts may discover VDisks through one of the following three mechanisms:
- iSNS (Internet Storage Name Service): SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
- SLP (Service Location Protocol): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One such service is the CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be reported.
- SCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). The Network Portal IP addresses of the iSCSI targets have to be defined before a discovery can be started.
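For illustration only: on a Linux host with the open-iscsi initiator installed, a SendTargets discovery and login against one SVC network portal might look like the following sketch. The portal address is a placeholder, and the target IQN simply follows the SVC naming form described in 5.2.3; other operating systems provide equivalent tools (for example, the Microsoft iSCSI Software Initiator).

iscsiadm -m discovery -t sendtargets -p <portal_ip_address>:3260
iscsiadm -m node -T iqn.1986-03.com.ibm:2145.<clustername>.<nodename> -p <portal_ip_address>:3260 --login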

5.4 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable CHAP authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SVC will not allow it to perform I/O to VDisks. The cluster can also be assigned a CHAP secret.

A new feature with iSCSI is that the IP addresses used to address an iSCSI target on the SVC node can be moved between the nodes of an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails because of a cause outside of the SVC (such as the cable being disconnected, the Ethernet router failing, and so on), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of the Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.

With the SVC 5.1 release, a new concept used for the handling of the iSCSI IP address failover, called a "clustered Ethernet port", was introduced. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster and contains configuration settings that are shared by all these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI and/or management ports.

An example of an iSCSI target node failover is shown in Figure 5-5. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:
1. During normal operation, one iSCSI target node instance is running on each SVC node. All the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all its network portal (IPv4/IPv6) IP addresses defined on Port 1/Port 2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port 1/Port 2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server reconnects to its iSCSI target, that is, the same IP addresses now presented by another node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 fails back to N1. Again, the iSCSI initiator running on the server reconnects to its iSCSI target. The management addresses do not fail back; N2 remains in the role of the configuration node for this cluster.
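A minimal sketch of enabling the CHAP authentication described at the start of this section follows; the secrets and the host name are placeholders, and you should verify the exact parameters against the CLI reference for your code level:

svctask chcluster -iscsiauthmethod chap -chapsecret svcclustersecret
svctask chhost -chapsecret hostchapsecret Kanaga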


Figure 5-5 iSCSI Node Failover Scenario

From the server's point of view, it is not required to have a multipathing driver (MPIO) in place to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node. A host multipathing driver for iSCSI is required if you want:
- To protect a server from network link failures, including port failures on the SVC nodes
- To protect a server from a server HBA failure (if two HBAs are in use)
- To protect a server from network failures, if the server is connected via two HBAs to two separate networks
- To provide load balancing on the server's HBAs and the network links
The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses. The new commands for managing iSCSI IP addresses are:
- The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.
- The svctask cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.
The new commands for managing the cluster IP addresses are:
- The svcinfo lsclusterip command returns a list of the cluster management IP addresses configured for each port.
- The svctask chclusterip command modifies the IP configuration parameters for the cluster.
A detailed description of how to use these commands can be found in Chapter 7, SVC operations using the CLI on page 337.
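As a hedged sketch of these commands (the addresses are placeholders; see Chapter 7 for the full syntax), assigning an iSCSI portal address to Ethernet port 1 of node 1 and then listing the result might look like this:

svctask cfgportip -node 1 -ip 10.1.1.10 -mask 255.255.255.0 -gw 10.1.1.1 1
svcinfo lsportip 1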

The parameters for remote services (SSH and Web services) remain associated with the cluster object. During a software upgrade from 4.3.1, the configuration settings for the cluster are used to configure clustered Ethernet Port 1. For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using dedicated VLANs for storage traffic will prevent any IP interface, switch, or target port failure from compromising the host server's access to the VDisk LUNs.

5.5 AIX-specific information


The following section details specific information that relates to the connection of AIX-based hosts into an SVC environment.
Note: In this section, the IBM System p information applies to all AIX hosts that are listed on the SAN Volume Controller interoperability support site, including IBM System i partitions and IBM JS blades.


5.5.1 Configuring the AIX host


To configure the AIX host, follow these steps:
1. Install the HBAs into the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your host, including any updates and APARs (Authorized Program Analysis Reports) for the operating system.
3. Connect the AIX host system to the Fibre Channel switches.
4. Configure the Fibre Channel switches (zoning) if needed.
5. Install and configure the 2145 and SDD drivers.
6. Configure the host, VDisks, and host mapping on the SAN Volume Controller.
7. Run the cfgmgr command to discover the VDisks created on the SVC.
The following sections detail the current support information. It is vital that you check the Web sites listed regularly for any updates.

5.5.2 Operating system versions and maintenance levels


At the time of writing, the following AIX levels are supported:
- AIX V4.3.3
- AIX 5L V5.1
- AIX 5L V5.2
- AIX 5L V5.3
- AIX V6.1.3
For the latest information and device driver support, always refer to this site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX

5.5.3 HBAs for IBM System p hosts


Ensure that your IBM System p AIX hosts use the correct host bus adapters (HBAs). The following IBM Web site provides current interoperability information about supported HBAs and firmware:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_pSeries
Note: The maximum number of Fibre Channel ports that are supported in a single host (or logical partition) is four. This can be four single-port adapters, two dual-port adapters, or a combination, as long as the maximum number of ports that are attached to the SAN Volume Controller does not exceed four.

Installing the host attachment script on IBM System p hosts


To attach an IBM System p AIX host, you must install the AIX host attachment script. Perform the following steps to install the host attachment scripts:
1. Access the following Web site:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html


2. Select Host Attachment Scripts for AIX.
3. Select either Host Attachment Script for SDDPCM or Host Attachment Scripts for SDD from the options, depending on your multipath device driver.
4. Download the AIX host attachment script for your multipath device driver.
5. Follow the instructions that are provided on the Web site or in any readme files to install the script.

5.5.4 Configuring for fast fail and dynamic tracking


For host systems that run AIX 5L V5.2 or later, you can achieve the best results by using the fast fail and dynamic tracking attributes. Perform the following steps to configure your host system to use these attributes:
1. Issue the following command to set the fast fail attribute on the Fibre Channel SCSI I/O Controller Protocol Device for each adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The previous command was for adapter fscsi0. Example 5-1 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-1 Enable Fast Fail

#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each Fibre Channel device:
chdev -l fscsi0 -a dyntrk=yes
The previous example command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking

#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed

Host adapter configuration settings


You can check the availability of the FC Host Adapters by using the command shown in Example 5-3.
Example 5-3 FC Host Adapter availability

#lsdev -Cc adapter | grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter


You can find the worldwide port number (WWPN) of your FC Host Adapter and check the firmware level, as shown in Example 5-4. The Network Address is the WWPN for the FC adapter.
Example 5-4 FC Host Adapter settings and WWPN

#lscfg -vpl fcs0
fcs0             U0.1-P2-I4/Q1  FC Adapter

      Part Number.................00P4494
      EC Level....................A
      Serial Number...............1E3120A68D
      Manufacturer................001E
      Device Specific.(CC)........2765
      FRU Number.................. 00P4495
      Network Address.............10000000C932A7FB
      ROS Level and ID............02C03951
      Device Specific.(Z0)........2002606D
      Device Specific.(Z1)........00000000
      Device Specific.(Z2)........00000000
      Device Specific.(Z3)........03000909
      Device Specific.(Z4)........FF401210
      Device Specific.(Z5)........02C03951
      Device Specific.(Z6)........06433951
      Device Specific.(Z7)........07433951
      Device Specific.(Z8)........20000000C932A7FB
      Device Specific.(Z9)........CS3.91A1
      Device Specific.(ZA)........C1D3.91A1
      Device Specific.(ZB)........C2D3.91A1
      Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

  Name:  fibre-channel
  Model:  LP9002
  Node:  fibre-channel@1
  Device Type:  fcp
  Physical Location: U0.1-P2-I4/Q1

5.5.5 Subsystem Device Driver (SDDPCM or SDD)


SDD is a pseudo device driver designed to support the multipath configuration environments within IBM products. It resides on a host system along with the native disk device driver and provides the following functions:
- Enhanced data availability
- Dynamic input/output (I/O) load balancing across multiple paths
- Automatic path failover protection
- Concurrent download of licensed internal code


SDD works by grouping the physical paths to an SVC LUN, each represented by an individual hdisk device within AIX, into a single vpath device (for example, if you have four physical paths to an SVC LUN, AIX sees four hdisk devices, which SDD groups into one vpath). From that moment onwards, AIX uses this vpath device to route I/O to the SVC LUN. Therefore, when making an LVM volume group using mkvg, we specify the vpath device as the destination and not the hdisk device. The SDD support matrix for AIX is available at:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX
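For illustration only (the volume group name itsovg and the device vpath0 are placeholders; use lsvpcfg or datapath query device to identify the vpath devices on your system):

# list the vpath devices and their underlying hdisks
lsvpcfg
# create the volume group on the vpath device, not on the individual hdisks
mkvg -y itsovg vpath0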

SDD / SDDPCM installation


After downloading the appropriate version of SDD, install it using the standard AIX installation procedure. The currently supported SDD levels are available at:
http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2
Check the driver readme file and make sure that your AIX system fulfills all the prerequisites.

SDD installation
In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd directory. From here we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally, we initiate the installp command, which installs SDD onto this AIX host.
Example 5-5 Installing SDD on AIX

#ls -l
total 3032
-rw-r-----   1 root system 1546240 Jun 24 15:29 devices.sdd.53.rte.tar
#tar -tvf devices.sdd.53.rte.tar
-rw-r-----   0 0      1536000 Oct 06 11:37:13 2006 devices.sdd.53.rte
#tar -xvf devices.sdd.53.rte.tar
x devices.sdd.53.rte, 1536000 bytes, 3000 media blocks.
# inutoc .
#ls -l
total 6032
-rw-r--r--   1 root system     476 Jun 24 15:33 .toc
-rw-r-----   1 root system 1536000 Oct 06 2006  devices.sdd.53.rte
-rw-r-----   1 root system 1546240 Jun 24 15:29 devices.sdd.53.rte.tar
# installp -ac -d . all

Example 5-6 checks the installation of SDD.
Example 5-6 Checking SDD device driver

#lslpp -l | grep -i sdd
devices.sdd.53.rte        1.7.0.0  COMMITTED  IBM Subsystem Device Driver
devices.sdd.53.rte        1.7.0.0  COMMITTED  IBM Subsystem Device Driver

Note: A specific 2145 devices.fcp file no longer exists. The standard devices.fcp now has combined support for SVC, ESS, DS8000, and DS6000.
We can also check that the SDD server is operational, as shown in Example 5-7 on page 167.


Example 5-7 SDD server is operational

#lssrc -s sddsrv
Subsystem         Group            PID     Status
 sddsrv                            168430  active

#ps -aef | grep sdd
    root 135174  41454   0 15:38:20  pts/1  0:00 grep sdd
    root 168430 127292   0 15:10:27      -  0:00 /usr/sbin/sddsrv

Enabling the SDD or SDDPCM Web interface is shown in 5.15, Using SDDDSM, SDDPCM, and SDD Web interface on page 250.

SDDPCM installation
In Example 5-8, we show the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.
Example 5-8 Installing SDDPCM on AIX

# ls -l
total 3232
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r-----  271001 449628  1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root   system      531 Jul 15 13:25 .toc
-rw-r-----   1 271001 449628  1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-9 checks the installation of SDDPCM.
Example 5-9 Checking SDDPCM device driver

# lslpp -l | grep sddpcm
devices.sddpcm.61.rte     2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
devices.sddpcm.61.rte     2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61

Enabling the SDD or SDDPCM Web interface is shown in 5.15, Using SDDDSM, SDDPCM, and SDD Web interface on page 250.


5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3
Before adding a new volume from the SVC, the AIX host system Kanaga had a vanilla configuration, as shown in Example 5-10.
Example 5-10 Status of AIX host system Kanaga

#lspv
hdisk0          0009cddaea97bf61                    rootvg          active
hdisk1          0009cdda43c9dfd5                    rootvg          active
hdisk2          0009cddabaef1d99                    rootvg          active
#lsvg
rootvg

In Example 5-11, we show the SVC configuration information relating to our AIX host: specifically, the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration. Using the SVC CLI, we can check that the host WWPNs, listed in Example 5-4 on page 165, are logged in to the SVC for the host definition Kanaga, by entering:
svcinfo lshost Kanaga
We can also find the serial numbers of the VDisks by using the following command:
svcinfo lshostvdiskmap Kanaga
Example 5-11 SVC definitions for host system Kanaga

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Kanaga id 2 name Kanaga port_count 2 type generic mask 1111 iogrp_count 2 WWPN 10000000C932A7FB node_logged_in_count 2 state active WWPN 10000000C932A800 node_logged_in_count 2 state active IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Kanaga id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 2 Kanaga 0 13 Kanaga0001 10000000C932A7FB 60050768018301BF2800000000000015 2 Kanaga 1 14 Kanaga0002 10000000C932A7FB 60050768018301BF2800000000000016 2 Kanaga 2 15 Kanaga0003 10000000C932A7FB 60050768018301BF2800000000000017 2 Kanaga 3 16 Kanaga0004 10000000C932A7FB 60050768018301BF2800000000000018 IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0001 id 13 name Kanaga0001


IO_group_id 0 IO_group_name io_grp0 status offline mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 5.0GB type striped formatted yes mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF2800000000000015 throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status offline sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 5.00GB real_capacity 5.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001 id name SCSI_id host_id host_name wwpn vdisk_UID 13 Kanaga0001 0 2 Kanaga 10000000C932A7FB 60050768018301BF2800000000000015 13 Kanaga0001 0 2 Kanaga 10000000C932A800 60050768018301BF2800000000000015


We need to run cfgmgr on the AIX host to discover the new disks and enable us to start the vpath configuration; if we run the configuration manager (cfgmgr) on each FC adapter, it will not create the vpaths, only the new hdisks. To configure the vpaths, we need to run the cfallvpath command after issuing the cfgmgr command on each of the FC adapters:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath
Alternatively, use the cfgmgr -vS command to check the complete system. This command probes the devices sequentially across all FC adapters and attached disks; however, it is very time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears as shown in Example 5-12. We can see the multiple hdisk devices, representing the multiple routes to the same SVC LUN, and we can see the vpath devices available for configuration.
Example 5-12 VDisks from SVC added with multiple different paths for each VDisk

#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02       SAN Volume Controller Device
hdisk4  Available 1Z-08-02       SAN Volume Controller Device
hdisk5  Available 1Z-08-02       SAN Volume Controller Device
hdisk6  Available 1Z-08-02       SAN Volume Controller Device
hdisk7  Available 1D-08-02       SAN Volume Controller Device
hdisk8  Available 1D-08-02       SAN Volume Controller Device
hdisk9  Available 1D-08-02       SAN Volume Controller Device
hdisk10 Available 1D-08-02       SAN Volume Controller Device
hdisk11 Available 1Z-08-02       SAN Volume Controller Device
hdisk12 Available 1Z-08-02       SAN Volume Controller Device
hdisk13 Available 1Z-08-02       SAN Volume Controller Device
hdisk14 Available 1Z-08-02       SAN Volume Controller Device
hdisk15 Available 1D-08-02       SAN Volume Controller Device
hdisk16 Available 1D-08-02       SAN Volume Controller Device
hdisk17 Available 1D-08-02       SAN Volume Controller Device
hdisk18 Available 1D-08-02       SAN Volume Controller Device
vpath0  Available                Data Path Optimizer Pseudo Device Driver
vpath1  Available                Data Path Optimizer Pseudo Device Driver
vpath2  Available                Data Path Optimizer Pseudo Device Driver
vpath3  Available                Data Path Optimizer Pseudo Device Driver

To make a volume group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg command, passing the vpath device as a parameter instead of the hdisk device. This is shown in Example 5-13.
Example 5-13 Running the mkvg command

#mkvg -y itsoaixvg vpath1
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg


Now, by running the lspv command, we can see that vpath1 has been assigned into the itsoaixvg volume group, as shown in Example 5-14.
Example 5-14 Showing the vpath assignment into the volume group

#lspv
hdisk0          0009cddaea97bf61                    rootvg          active
hdisk1          0009cdda43c9dfd5                    rootvg          active
hdisk2          0009cddabaef1d99                    rootvg          active
vpath1          0009cddabce27ba5                    itsoaixvg       active

The lsvpcfg command also displays the new relationship between vpath1 and the itsoaixvg volume group, and it shows each hdisk associated with vpath1, as shown in Example 5-15.
Example 5-15 Displaying the vpath to hdisk to volume group relationship

#lsvpcfg
vpath0 (Avail ) 60050768018301BF2800000000000015 = hdisk3 (Avail ) hdisk7 (Avail )
vpath1 (Avail pv itsoaixvg) 60050768018301BF2800000000000016 = hdisk4 (Avail ) hdisk8 (Avail )
vpath2 (Avail ) 60050768018301BF2800000000000017 = hdisk5 (Avail ) hdisk9 (Avail )
vpath3 (Avail ) 60050768018301BF2800000000000018 = hdisk6 (Avail ) hdisk10 (Avail )

In Example 5-16, running the command lspv vpath1 shows a more verbose output for vpath1.
Example 5-16 Verbose details of vpath1

#lspv vpath1
PHYSICAL VOLUME:    vpath1                   VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5         VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            8 megabyte(s)            LOGICAL VOLUMES:  0
TOTAL PPs:          639 (5112 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           639 (5112 megabytes)     HOT SPARE:        no
USED PPs:           0 (0 megabytes)          MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  128..128..127..128..128
USED DISTRIBUTION:  00..00..00..00..00


5.5.7 Using SDD


Within SDD, we are able to check the status of the adapters and devices now under SDD control with the use of the datapath command set. In Example 5-17, we can see the status of both HBA cards as NORMAL and ACTIVE.
Example 5-17 SDD commands used to check the availability of the adapters

#datapath query adapter

Active Adapters :2

Adpt#    Name    State     Mode    Select  Errors  Paths  Active
    0  fscsi0   NORMAL   ACTIVE         0       0      4       1
    1  fscsi1   NORMAL   ACTIVE        56       0      4       1

In Example 5-18, we see detailed information about each vpath device. Initially, we see that vpath1 is the only vpath device in an open status. This is because it is the only vpath currently assigned to a volume group. Additionally, for vpath1, we see that only path #1 and path #2 have been selected (used) by SDD. This is because these are the two physical paths that connect to the preferred node of the I/O group of this SVC cluster. The remaining two paths within this vpath device are only accessed in a failover scenario.
Example 5-18 SDD commands used to check the availability of the devices

#datapath query device

Total Devices : 4

DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000015
==========================================================================
Path#      Adapter/Hard Disk    State     Mode     Select  Errors
    0          fscsi0/hdisk3    CLOSE   NORMAL          0       0
    1          fscsi1/hdisk7    CLOSE   NORMAL          0       0
    2          fscsi0/hdisk11   CLOSE   NORMAL          0       0
    3          fscsi1/hdisk15   CLOSE   NORMAL          0       0

DEV#:   1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000016
==========================================================================
Path#      Adapter/Hard Disk    State     Mode     Select  Errors
    0          fscsi0/hdisk4    OPEN    NORMAL          0       0
    1          fscsi1/hdisk8    OPEN    NORMAL         28       0
    2          fscsi0/hdisk12   OPEN    NORMAL         32       0
    3          fscsi1/hdisk16   OPEN    NORMAL          0       0

DEV#:   2  DEVICE NAME: vpath2  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000017
==========================================================================
Path#      Adapter/Hard Disk    State     Mode     Select  Errors
    0          fscsi0/hdisk5    CLOSE   NORMAL          0       0
    1          fscsi1/hdisk9    CLOSE   NORMAL          0       0
    2          fscsi0/hdisk13   CLOSE   NORMAL          0       0
    3          fscsi1/hdisk17   CLOSE   NORMAL          0       0

DEV#:   3  DEVICE NAME: vpath3  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000018
==========================================================================
Path#      Adapter/Hard Disk    State     Mode     Select  Errors
    0          fscsi0/hdisk6    CLOSE   NORMAL          0       0
    1          fscsi1/hdisk10   CLOSE   NORMAL          0       0
    2          fscsi0/hdisk14   CLOSE   NORMAL          0       0
    3          fscsi1/hdisk18   CLOSE   NORMAL          0       0

5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD
The volume group itsoaixvg is created by using vpath1. Logical volumes are then created in the volume group, and the file systems are created and mounted on the mount points /teslv1 and /teslv2, as seen in Example 5-19.
Example 5-19 Host system new volume group and file system configuration

#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     LPs  PPs  PVs  LV STATE     MOUNT POINT
loglv01   jfs2log    1    1    1  open/syncd   N/A
fslv00    jfs2     128  128    1  open/syncd   /teslv1
fslv01    jfs2     128  128    1  open/syncd   /teslv2
#df -g
Filesystem    GB blocks  Free  %Used  Iused  %Iused  Mounted on
/dev/hd4           0.03  0.01    62%   1357     31%  /
/dev/hd2           9.06  4.32    53%  17341      2%  /usr
/dev/hd9var        0.03  0.03    10%    137      3%  /var
/dev/hd3           0.12  0.12     7%     31      1%  /tmp
/dev/hd1           0.03  0.03     2%     11      1%  /home
/proc                 -     -      -      -      -   /proc
/dev/hd10opt       0.09  0.01    86%   1947     38%  /opt
/dev/lv00          0.41  0.39     4%     19      1%  /usr/sys/inst.images
/dev/fslv00        2.00  2.00     1%      4      1%  /teslv1
/dev/fslv01        2.00  2.00     1%      4      1%  /teslv2
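The volume group and file systems shown in Example 5-19 can be created with commands similar to the following sketch. The file system size and options here are assumptions for illustration and are not taken from the example:

mkvg -y itsoaixvg vpath1                                       (create the volume group on the vpath device)
crfs -v jfs2 -g itsoaixvg -a size=2G -p rw -m /teslv1 -A yes   (create a JFS2 file system in the volume group)
mount /teslv1                                                  (mount the new file system)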

5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM
Before adding a new volume from the SVC, the AIX host system Atlantic had a vanilla configuration, as shown in Example 5-20.
Example 5-20 Status of AIX host system Atlantic

# lspv
hdisk0          0009cdcaeb48d3a3                    rootvg          active
hdisk1          0009cdcac26dbb7c                    rootvg          active
hdisk2          0009cdcab5657239                    rootvg          active
# lsvg
rootvg


In Example 5-22 on page 175, we show SVC configuration information relating to our AIX host, specifically the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration. Our example host is named Atlantic. Example 5-21 shows the HBA information of our example host.
Example 5-21 HBA information example host Atlantic

## lsdev -Cc adapter | grep fcs fcs1 Available 1H-08 FC Adapter fcs2 Available 1D-08 FC Adapter # lscfg -vpl fcs1 fcs1 U0.1-P2-I4/Q1 FC Adapter Part Number.................00P4494 EC Level....................A Serial Number...............1E3120A644 Manufacturer................001E Customer Card ID Number.....2765 FRU Number.................. 00P4495 Network Address.............10000000C932A865 ROS Level and ID............02C039D0 Device Specific.(Z0)........2002606D Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........03000909 Device Specific.(Z4)........FF401411 Device Specific.(Z5)........02C039D0 Device Specific.(Z6)........064339D0 Device Specific.(Z7)........074339D0 Device Specific.(Z8)........20000000C932A865 Device Specific.(Z9)........CS3.93A0 Device Specific.(ZA)........C1D3.93A0 Device Specific.(ZB)........C2D3.93A0 Device Specific.(ZC)........00000000 Hardware Location Code......U0.1-P2-I4/Q1

PLATFORM SPECIFIC Name: fibre-channel Model: LP9002 Node: fibre-channel@1 Device Type: fcp Physical Location: U0.1-P2-I4/Q1 ## lscfg -vpl fcs2 fcs2 U0.1-P2-I5/Q1 FC Adapter Part Number.................80P4383 EC Level....................A Serial Number...............1F5350CD42 Manufacturer................001F Customer Card ID Number.....2765 FRU Number.................. 80P4384 Network Address.............10000000C94C8C1C


ROS Level and ID............02C03951 Device Specific.(Z0)........2002606D Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........03000909 Device Specific.(Z4)........FF401210 Device Specific.(Z5)........02C03951 Device Specific.(Z6)........06433951 Device Specific.(Z7)........07433951 Device Specific.(Z8)........20000000C94C8C1C Device Specific.(Z9)........CS3.91A1 Device Specific.(ZA)........C1D3.91A1 Device Specific.(ZB)........C2D3.91A1 Device Specific.(ZC)........00000000 Hardware Location Code......U0.1-P2-I5/Q1

PLATFORM SPECIFIC Name: fibre-channel Model: LP9002 Node: fibre-channel@1 Device Type: fcp Physical Location: U0.1-P2-I5/Q1 # Using the SVC CLI, we can check that the host WWPNs, as listed in Example 5-22, are logged into the SVC for the host definition Atlantic, by entering: svcinfo lshost Atlantic We can also find the serial numbers of the VDisks using the following command: svcinfo lshostvdiskmap Atlantic
Example 5-22 SVC definitions for host system Atlantic

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic id 8 name Atlantic port_count 2 type generic mask 1111 iogrp_count 4 WWPN 10000000C94C8C1C node_logged_in_count 2 state active WWPN 10000000C932A865 node_logged_in_count 2 state active IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic id name SCSI_id vdisk_id wwpn vdisk_UID 8 Atlantic 0 14 10000000C94C8C1C 6005076801A180E90800000000000060 8 Atlantic 1 22 10000000C94C8C1C 6005076801A180E90800000000000061

vdisk_name Atlantic0001 Atlantic0002


8 Atlantic 2 23 10000000C94C8C1C 6005076801A180E90800000000000062 IBM_2145:ITSO-CLS2:admin>

Atlantic0003

We need to run cfgmgr on the AIX host to discover the new disks and enable us to use them:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
Alternatively, use the cfgmgr -vS command to check the complete system. This command probes the devices sequentially across all FC adapters and attached disks; however, it is very time intensive:
# cfgmgr -vS
The raw SVC disk configuration of the AIX host system now appears as shown in Example 5-23. We can see the MPIO FC 2145 devices, representing the SVC LUNs.
Example 5-23 VDisks from SVC added with multiple different paths for each VDisk

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02       MPIO FC 2145
hdisk4 Available 1D-08-02       MPIO FC 2145
hdisk5 Available 1D-08-02       MPIO FC 2145

To make a volume group (for example, itsoaixvg) to host the LUNs, we use the mkvg command, passing the device as a parameter. This is shown in Example 5-24.
Example 5-24 Running the mkvg command

# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

Now, by running the lspv command, we can see the disks and the assigned volume groups, as shown in Example 5-25.
Example 5-25 Showing the hdisk assignment to the volume groups

# lspv
hdisk0          0009cdcaeb48d3a3                    rootvg          active
hdisk1          0009cdcac26dbb7c                    rootvg          active
hdisk2          0009cdcab5657239                    rootvg          active
hdisk3          0009cdca28b589f5                    itsoaixvg       active
hdisk4          0009cdca28b87866                    itsoaixvg1      active
hdisk5          0009cdca28b8ad5b                    itsoaixvg2      active


In Example 5-26, running the lspv hdisk3 command shows more verbose output for one of the SVC LUNs.
Example 5-26 Verbose details of hdisk3

# lspv hdisk3
PHYSICAL VOLUME:    hdisk3                   VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cdca28b589f5         VG IDENTIFIER     0009cdca00004c000000011b28b58ae2
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            8 megabyte(s)            LOGICAL VOLUMES:  0
TOTAL PPs:          511 (4088 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           511 (4088 megabytes)     HOT SPARE:        no
USED PPs:           0 (0 megabytes)          MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  103..102..102..102..102
USED DISTRIBUTION:  00..00..00..00..00
#

5.5.10 Using SDDPCM


Within SDDPCM, we are able to check the status of the adapters and devices now under SDDPCM control by using the pcmpath command set. In Example 5-27, we can see the status of both HBA cards as NORMAL and ACTIVE.
Example 5-27 SDDPCM commands used to check the availability of the adapters

# pcmpath query adapter

Active Adapters :2

Adpt#    Name    State     Mode    Select  Errors  Paths  Active
    0  fscsi1   NORMAL   ACTIVE       407       0      6       6
    1  fscsi2   NORMAL   ACTIVE       425       0      6       6

From Example 5-28, we see detailed information about each MPIO device. The asterisk (*) next to a path number marks the nonpreferred paths, that is, the paths that connect to the node that is not the preferred node for this VDisk. SDDPCM mainly selects (uses) the two physical paths that connect to the preferred node of the I/O group of this SVC cluster; the remaining two paths within each MPIO device are mainly accessed in a failover scenario.
Example 5-28 SDDPCM commands used to check the availability of the devices

# pcmpath query device

Total Devices : 3

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0           fscsi1/path0    OPEN    NORMAL        152       0
    1*          fscsi1/path1    OPEN    NORMAL         48       0
    2*          fscsi2/path2    OPEN    NORMAL         48       0
    3           fscsi2/path3    OPEN    NORMAL        160       0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0*          fscsi1/path0    OPEN    NORMAL         37       0
    1           fscsi1/path1    OPEN    NORMAL         66       0
    2           fscsi2/path2    OPEN    NORMAL         71       0
    3*          fscsi2/path3    OPEN    NORMAL         38       0

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name    State     Mode     Select  Errors
    0           fscsi1/path0    OPEN    NORMAL         66       0
    1*          fscsi1/path1    OPEN    NORMAL         38       0
    2*          fscsi2/path2    OPEN    NORMAL         38       0
    3           fscsi2/path3    OPEN    NORMAL         70       0
#

5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The volume group itsoaixvg is created by using hdisk3. A file system is then created in the volume group and mounted on the mount point /itsoaixvg, as seen in Example 5-29.
Example 5-29 Host system new volume group and file system configuration

# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME   TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
loglv00   jfs2log    1    1    1  closed/syncd  N/A
fslv00    jfs2     384  384    1  closed/syncd  /itsoaixvg
#

5.5.12 Expanding an AIX volume


It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some operating systems, such as AIX 5L Version 5.2 and higher, can handle volumes being expanded, even if the host has applications running. In the following examples, we show the procedure with AIX 5L V5.3 and SDD, but the procedure is also the same when using AIX V6 or SDDPCM. The volume group to which the VDisk is assigned, if it is assigned to any, must not be a concurrently accessible volume group. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror on that VDisk has to be stopped before it is possible to expand the VDisk.

The following steps show how to expand a volume on an AIX host, where the volume is a VDisk from the SVC:

1. To list a VDisk size, use the command svcinfo lsvdisk <VDisk_name>. Example 5-30 shows the VDisk Kanaga0002 that we have allocated to our AIX server before we expand it. Here, the capacity is 5 GB, and the vdisk_UID is 60050768018301BF2800000000000016.
Example 5-30 Expanding a VDisk on AIX

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002 id 14 name Kanaga0002 IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 5.0GB type striped formatted yes mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF2800000000000016 throttling 0 preferred_node_id 2 fast_write_state not_empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 5.00GB real_capacity 5.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize


2. To identify which vpath this VDisk is associated with on the AIX host, we use the SDD datapath query device command, as shown in Example 5-18 on page 172. Here, we can see that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated with vpath1, because the vdisk_UID matches the SERIAL field on the AIX host.

3. To see the size of the volume on the AIX host, we use the lspv command, as shown in Example 5-31. This shows that the volume size is 5112 MB, which corresponds to the 5 GB shown in Example 5-30 on page 179.
Example 5-31 Finding the size of the volume in AIX

#lspv vpath1
PHYSICAL VOLUME:    vpath1                   VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5         VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            8 megabyte(s)            LOGICAL VOLUMES:  2
TOTAL PPs:          639 (5112 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           0 (0 megabytes)          HOT SPARE:        no
USED PPs:           639 (5112 megabytes)     MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..00
USED DISTRIBUTION:  128..128..127..128..128

4. To expand the volume on the SVC, we use the command svctask expandvdisksize to increase the capacity on the VDisk. In Example 5-32, we expand the VDisk by 1GB.
Example 5-32 Expanding a VDisk

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb Kanaga0002

5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here, we can see that the VDisk Kanaga0002 has been expanded to 6 GB in capacity (Example 5-33).
Example 5-33 Verifying that the VDisk has been expanded

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 6.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

6. AIX has not yet recognized a change in the capacity of the vpath1 volume, because no dynamic mechanism exists within the operating system to communicate this configuration update. Therefore, to make AIX recognize the extra capacity on the volume without stopping any applications, we use the chvg -g fc_source_vg command, where fc_source_vg is the name of the volume group to which vpath1 belongs. If AIX does not return anything, the command was successful and the volume changes in this volume group have been saved. If AIX cannot see any changes in the volumes, it returns a message indicating this.

7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown in Example 5-34.
Example 5-34 Verify that AIX can see the newly expanded VDisk

#lspv vpath1
PHYSICAL VOLUME:    vpath1                   VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5         VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            8 megabyte(s)            LOGICAL VOLUMES:  2
TOTAL PPs:          767 (6136 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           128 (1024 megabytes)     HOT SPARE:        no
USED PPs:           639 (5112 megabytes)     MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  00..00..00..00..128
USED DISTRIBUTION:  154..153..153..153..26


Here, we can see that the volume now has a size of 6136 MB, equal to 6 GB. After this, we can expand the file systems in this volume group to use the new capacity.
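As a reference, steps 6 and 7, combined with growing a file system into the new space, look similar to the following sketch. The volume group and mount point are taken from the earlier examples; the size value is an assumption for illustration:

chvg -g itsoaixvg            (examine the disks in the volume group and pick up the new capacity)
lspv vpath1                  (verify the new TOTAL PPs value)
chfs -a size=+1G /teslv1     (grow the JFS2 file system by 1 GB into the new space)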

5.5.13 Removing an SVC volume on AIX


Before we remove a VDisk assigned to an AIX host, we have to make sure that there is no data on it and that no applications depend on the volume. This is a standard AIX procedure. We move all data off the volume, remove the volume from the volume group, and delete the vpath and the hdisks associated with the vpath. Then, we remove the VDisk-to-host mapping on the SVC for that volume. If the VDisk is no longer needed, we delete it so that the extents become available when we create a new VDisk on the SVC.
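The following sketch illustrates this sequence using the volume group, vpath, host, and VDisk names from the earlier examples. The exact devices on your host can differ, so treat it as an outline under those assumptions rather than the definitive procedure:

umount /teslv1                      (unmount any file systems that use the volume)
varyoffvg itsoaixvg                 (deactivate the volume group)
exportvg itsoaixvg                  (remove the volume group definition from the host)
rmdev -dl vpath1                    (delete the vpath device)
rmdev -dl hdisk4                    (repeat for each hdisk that lsvpcfg lists under vpath1)

Then, on the SVC:

svctask rmvdiskhostmap -host Kanaga Kanaga0002   (remove the VDisk-to-host mapping)
svctask rmvdisk Kanaga0002                       (delete the VDisk and free its extents)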

5.5.14 Running SVC commands from an AIX host system


To issue CLI commands, you must install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power Systems. For AIX V4.3.3, the software is available from the AIX toolbox for Linux applications. The AIX installation images from IBM developerWorks are available at this Web site:
http://sourceforge.net/projects/openssh-aix

Do the following steps:
1. To generate the key files on AIX, issue the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. The value for rsa2 is just rsa, while for rsa1 the type needs to be rsa1. When creating the key to the SVC, use type rsa2. The -f parameter specifies the file names of the private and public keys on the AIX server (the public key gets the extension .pub after the file name).
2. Next, you have to install the public key on the SVC, which can be done by using the master console. Copy the public key to the master console, and install the key to the SVC, as described in Chapter 4, SVC initial configuration on page 101.
3. On the AIX server, make sure that the private key and the public key are in the .ssh directory in the home directory of the user.
4. To connect to the SVC and use a CLI session from the AIX host, issue the following command:
ssh -l admin -i filename svc
5. You can also issue the commands directly on the AIX host, which is useful when making scripts. To do this, add the SVC commands to the previous command. For example, to list the hosts defined on the SVC, enter the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user on the SVC we connect to, -i filename is the file name of the private key that was generated, and svc is the name or IP address of the SVC.
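Putting these steps together, a typical sequence looks similar to the following sketch. The key file path is an assumption, and svc stands for the name or IP address of your SVC cluster:

ssh-keygen -t rsa -f /home/admin/svckey                 (creates the private key svckey and the public key svckey.pub)
                                                        (upload svckey.pub to the SVC as described in Chapter 4)
ssh -l admin -i /home/admin/svckey svc                  (opens an interactive CLI session)
ssh -l admin -i /home/admin/svckey svc svcinfo lshost   (runs a single CLI command and returns)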


5.6 Windows-specific information


In the following sections, we detail specific information about the connection of Windows-based hosts (Windows 2000, Windows 2003, and Windows 2008) to the SVC environment.

5.6.1 Configuring Windows 2000, Windows 2003, and Windows 2008 hosts
This section provides an overview of the requirements for attaching the SVC to a host running Windows 2000 Server, Windows 2003 Server, or Windows 2008 Server. Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:
- For the Windows Server 2003 x64 Edition operating system, you must install the hotfix from KB 908980. If you do not install it before operation, preferred pathing is not available. You can find the hotfix at: http://support.microsoft.com/kb/908980
- Check the LUN limitations for your host system. Ensure that there are enough Fibre Channel adapters installed in the server to handle the total number of LUNs that you want to attach.

5.6.2 Configuring Windows


To configure the Windows hosts, follow these steps:
1. Make sure that the latest OS hotfixes are applied to your Microsoft server.
2. Use the latest firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as shown in 5.6.4, Host adapter installation and configuration on page 184.
4. Connect the Windows 2000/2003/2008 server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.6.3, Hardware lists, device driver, HBAs and firmware levels on page 183.
7. Configure the HBA for hosts running Windows, as described in 5.6.4, Host adapter installation and configuration on page 184.
8. Check the HBA driver readme for the required Windows registry settings, as described in 5.6.3, Hardware lists, device driver, HBAs and firmware levels on page 183.
9. Check the disk timeout on Microsoft Windows Server, as described in 5.6.5, Changing the disk timeout on Microsoft Windows Server on page 186.
10. Install and configure SDD/SDDDSM.
11. Restart the Windows 2000/2003/2008 host system.
12. Configure the host, VDisks, and host mapping in the SVC.
13. Use Rescan disk in Computer Management of the Windows server to discover the VDisks created on the SAN Volume Controller.

5.6.3 Hardware lists, device driver, HBAs and firmware levels


The latest information about supported hardware, device driver, and firmware is available at: http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_Windows


There, you will also find the hardware list for supported host bus adapters and the driver levels for Windows. Check the supported firmware and driver level for your host bus adapter and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver:
- For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
- For the QLogic HBA driver, SDDDSM requires the StorPort version of the miniport driver.
- For the QLogic HBA driver, SDD requires the SCSI port version of the miniport driver.

5.6.4 Host adapter installation and configuration


Install the host adapter(s) into your system. Refer to the manufacturers instructions for installation and configuration of the HBAs. In IBM System x servers, the HBA should always be installed in the first slots. This means that if you install, for example, two HBAs and two network cards, the HBAs should be installed in slot 1 and slot 2, and the network cards can be installed in the remaining slots.

Configure the QLogic HBA for hosts running Windows


After you have installed the HBA in the server, and have applied the HBA firmware and device driver, you have to configure the HBA. To do this, perform the following steps:
1. Restart the server.
2. When you see the QLogic banner, press the Ctrl-Q keys to open the FAST!UTIL menu panel.
3. From the Select Host Adapter menu, select the Adapter Type QLA2xxx.
4. From the Fast!UTIL Options menu, select Configuration Settings.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. From the Host Adapter Settings menu, select the following values:
   a. Host Adapter BIOS: Disabled
   b. Frame size: 2048
   c. Loop Reset Delay: 5 (minimum)
   d. Adapter Hard Loop ID: Disabled
   e. Hard Loop ID: 0
   f. Spinup Delay: Disabled
   g. Connection Options: 1 - point to point only
   h. Fibre Channel Tape Support: Disabled
   i. Data Rate: 2
7. Press the Esc key to return to the Configuration Settings menu.
8. From the Configuration Settings menu, select Advanced Adapter Settings.
9. From the Advanced Adapter Settings menu, set the following parameters:
   a. Execution throttle: 100
   b. Luns per Target: 0
   c. Enable LIP Reset: No
   d. Enable LIP Full Login: Yes


   e. Enable Target Reset: No
      Note: If you are using a subsystem device driver (SDD) lower than 1.6, set Enable Target Reset to Yes.
   f. Login Retry Count: 30
   g. Port Down Retry Count: 15
   h. Link Down Timeout: 30
   i. Extended error logging: Disabled (might be enabled for debugging)
   j. RIO Operation Mode: 0
   k. Interrupt Delay Timer: 0
10. Press Esc to return to the Configuration Settings menu.
11. Press Esc.
12. From the Configuration settings modified window, select Save changes.
13. From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter was installed in your system.
14. Select the other host adapter and repeat all steps from step 4 to step 12.
15. Repeat this process for all of the installed QLogic adapters in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.

Configuring the Emulex HBA for hosts running Windows


After you have installed the Emulex HBA and driver, you must configure your HBA. For the Emulex HBA StorPort driver, accept the default settings and set topology to 1 (1 = F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the parameters to those given in Table 5-1.
Table 5-1 FC Port driver changes

Parameters                                                 Recommended Settings
Query name server for all N-ports (BrokenRSCN)             Enabled
LUN mapping (MapLuns)                                      Enabled (1)
Automatic LUN mapping (MapLuns)                            Enabled (1)
Allow multiple paths to SCSI target (MultipleSCSIClaims)   Enabled
Scan in device ID order (ScanDeviceIDOrder)                Disabled
Translate queue full to busy (TransleteQueueFull)          Enabled
Retry timer (RetryTimer)                                   2000 milliseconds
Maximum number of LUNs (MaximumLun)                        Equal to or greater than the number of the SVC LUNs that are available to the HBA

Note: The parameters shown correspond to the parameters in HBAnywhere.


5.6.5 Changing the disk timeout on Microsoft Windows Server


This section describes how to change the disk I/O timeout value on the Windows 2000, 2003, and 2008 Server operating systems. On your Windows Server hosts, change the disk I/O timeout value to 60 in the Windows registry, as follows:
1. In Windows, click the Start button and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value) and, if necessary, change the value to 60, as shown in Figure 5-6.

Figure 5-6 Regedit
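The same value can also be set and verified from a command prompt with the reg command. This is a sketch of an equivalent sequence and is not taken from the chapter's procedure:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f     (sets the disk I/O timeout to 60 seconds, decimal)
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue                          (displays the current value for verification)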

5.6.6 SDD driver installation on Windows


At the time of writing, the SDD levels in Table 5-2 are supported.
Table 5-2 Currently supported SDD levels

Windows operating system                                              SDD level
NT 4                                                                  1.5.1.1
2000 / 2003 SP2 (32-bit) / 2003 SP2 (IA-64)                           1.6.3.0-2
2000 with MSCS and Veritas Volume Manager /
2003 SP2 (32-bit) with MSCS and Veritas Volume Manager                Not available

See the following Web site for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en

Note: We recommend that you use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM.


Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSI port driver. After downloading the appropriate version of SDD from the Web site, extract the file and run setup.exe to install SDD. A command line will appear. Answer Y (Figure 5-7) to install the driver.

Figure 5-7 Confirm SDD installation

After the setup has completed, answer Y again to reboot your system (Figure 5-8).

Figure 5-8 Reboot system after installation

To check if your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-9).

Figure 5-9 Subsystem Device Driver Management


The Subsystem Device Driver Management Properties window will appear. Select the Driver tab and make sure that you have installed the correct driver version (see Figure 5-10).

Figure 5-10 Subsystem Device Driver Management Properties Driver tab

5.6.7 SDDDSM driver installation on Windows


The following sections show how to install the SDDDSM driver on Windows.

Windows 2003, 2008, and MPIO


Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with device specific modules (DSMs) written by vendors, but the MPIO driver package does not, by itself, form a complete solution. This joint solution allows the storage vendors to design device specific solutions that are tightly integrated with the Windows operating system. MPIO is not shipped with the Windows operating system; storage vendors must pack the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath IO solution based on Microsoft MPIO technology; it is a device specific module specifically designed to support IBM storage devices on Windows 2003 and 2008 servers. The intention of MPIO is to get a better integration of multipath storage solution with the operating system, and allows the use of multipaths in the SAN infrastructure during the boot process for SAN boot hosts.

Subsystem Device Driver Device Specific Module (SDDDSM) for SVC


Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for the SVC device for the Windows Server 2003 and 2008 operating systems. SDDDSM is the IBM multipath I/O solution based on Microsoft MPIO technology, and it is a device specific module specifically designed to support IBM storage devices. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage SAN Volume Controller. It resides in a host system with the native disk device driver and provides the following functions:
- Enhanced data availability
- Dynamic I/O load-balancing across multiple paths
- Automatic path failover protection
- Concurrent download of licensed internal code
- Path-selection policies for the host system

Note the following considerations:
- There is no SDDDSM support for Windows 2000.
- For the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver.

Table 5-3 shows, at the time of writing, the supported SDDDSM driver levels.
Table 5-3 Currently supported SDDDSM driver levels

Windows operating system                      SDD level
2003 SP2 (32-bit) / 2003 SP2 (x64)            2.2.0.0-11
2008 (32-bit) / 2008 (x64)                    2.2.0.0-11

To check which levels are available, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en#WindowsSDDDSM
To download SDDDSM, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en
The installation procedure for SDDDSM is the same as for SDD, but remember that you have to use the StorPort HBA driver instead of the SCSI port driver. The SDD installation is described in 5.6.6, SDD driver installation on Windows on page 186. After completing the installation, you will see Microsoft MPIO in Device Manager (Figure 5-11).

Figure 5-11 Windows Device Manager - MPIO

The SDDDSM installation for Windows 2008 is described in 5.8, Example configuration of attaching an SVC to a Windows 2008 host on page 199.


5.7 Discovering the assigned VDisk in Windows 2000 / 2003


In this section, we describe how to discover assigned VDisks in Windows 2000 and Windows 2003. The screen captures will show a Windows 2003 host with SDDDSM installed, but discovering the disks in Windows 2000 or with SDD is the same procedure. Before adding a new volume from the SAN Volume Controller, the Windows 2003 host system had the configuration shown in Figure 5-12, with only local disks.

Figure 5-12 Windows 2003 host system before adding a new volume from SVC

We can check that the WWPN is logged into the SAN Volume Controller for the host Senegal by entering the following command (Example 5-35): svcinfo lshost Senegal
Example 5-35 Host info - Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the host Senegal, the VDisk Senegal_bas0001, and the mapping between the host and the VDisk are defined in the SAN Volume Controller, as described in Example 5-36 on page 191. In our example, the VDisks Senegal_bas0002 and Senegal_bas0003 have the same configuration as VDisk Senegal_bas0001.


Example 5-36 VDisk mapping - Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F 1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010 1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011 IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001 id 7 name Senegal_bas0001 IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_0_DS45 capacity 10.0GB type striped formatted yes mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801A180E9080000000000000F throttling 0 preferred_node_id 3 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_0_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize


We can also find the serial number of the VDisks by entering the following command (Example 5-37): svcinfo lsvdiskhostmap Senegal_bas0001
Example 5-37 VDisk serial number - Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001
id name            SCSI_id host_id host_name wwpn             vdisk_UID
7  Senegal_bas0001 0       1       Senegal   210000E08B89B9C0 6005076801A180E9080000000000000F
7  Senegal_bas0001 0       1       Senegal   210000E08B89CCC2 6005076801A180E9080000000000000F

After installing the necessary drivers and after the rescan disks operation completes, the new disks are found in the Computer Management window, as shown in Figure 5-13.

Figure 5-13 Windows 2003 host system with three new volumes from SVC

In the Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Devices (Figure 5-14 on page 193). The number of IBM 2145 SCSI Disk Devices that you see is equal to:
(# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs)
The IBM 2145 Multi-Path Disk Devices are the devices created by the multipath driver (Figure 5-14 on page 193). The number of these devices is equal to the number of VDisks presented to the host.


Figure 5-14 Windows 2003 Device Manager with assigned VDisks

When following the SAN zoning recommendation, this gives us, for one VDisk and a host with two HBAs:
(# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs) = 1 x 2 x 2 = 4 paths
You can check whether all of the paths are available by selecting Start → All Programs → Subsystem Device Driver (DSM) → Subsystem Device Driver (DSM). The SDD (DSM) command-line interface appears. Enter the following command to see which paths are available to your system (Example 5-38):
Example 5-38 Datapath query device

Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#              Adapter/Hard Disk    State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL       47       0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL       28       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#              Adapter/Hard Disk    State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL      162       0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL      155       0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL        0       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145       POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#              Adapter/Hard Disk    State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL       51       0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL       25       0

C:\Program Files\IBM\SDDDSM>

Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one path is CLOSE, it means that the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.

5.7.1 Extending a Windows 2000 or 2003 volume


It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some operating systems, such as Windows 2000 and Windows 2003, can handle volumes being expanded even if the host has applications running. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror on that VDisk has to be stopped before it is possible to expand the VDisk.

Important:
- For VDisk expansion to work on Windows 2000, apply Windows 2000 Hotfix Q327020, which is available from the Microsoft Knowledge Base at: http://support.microsoft.com/kb/327020
- If you want to expand a logical drive in an extended partition in Windows 2003, apply the hotfix from KB 841650, which is available from the Microsoft Knowledge Base at: http://support.microsoft.com/kb/841650/en-us
- Use the updated DiskPart version for Windows 2003, which is available from the Microsoft Knowledge Base at: http://support.microsoft.com/kb/923076/en-us

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut down all nodes except one, and that applications in the resource that use the volume that is going to be expanded are stopped before expanding the volume. Applications running in other resources can continue. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.


To expand a volume in use on Windows 2000 and Windows 2003, we used DiskPart. The DiskPart tool is part of Windows 2003; for other Windows versions, you can download it free of charge from Microsoft. DiskPart is a tool developed by Microsoft to ease the administration of storage. It is a command-line interface where you can manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information, see the Microsoft Web sites:
http://www.microsoft.com
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech

The following discussion shows an example of how to expand a volume on a Windows 2003 host, where the volume is a VDisk from the SVC. To list a VDisk size, use the command svcinfo lsvdisk <VDisk_name>. Example 5-36 on page 191 shows this information for Senegal_bas0001 before expanding the VDisk. Here, we can see that the capacity is 10 GB, and we can also see the vdisk_UID. To find out which disk this VDisk corresponds to on the Windows 2003 host, we use the SDD datapath query device command on the Windows host. Here, we can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows host matches the vdisk_UID of Senegal_bas0001 (Example 5-36 on page 191). To see the size of the volume on the Windows host, we use Disk Management, as shown in Figure 5-15.

Figure 5-15 Windows 2003 - Disk Management


This shows that the volume size is 10 GB. To expand the volume on the SVC, we use the command svctask expandvdisksize to increase the capacity on the VDisk. In this example, we expand the VDisk by 1 GB (Example 5-39).
Example 5-39 svctask expandvdisksize command

IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the VDisk has been expanded, we use the svcinfo lsvdisk command again. In Example 5-39, we can see that the VDisk Senegal_bas0001 has been expanded to 11 GB in capacity.

After performing a Disk Rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-16.

Figure 5-16 Expanded volume in Disk Manager

This shows that Disk1 now has 1 GB of unallocated new capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-40:

diskpart         Starts DiskPart in a DOS prompt.
list volume      Shows you all available volumes.
select volume    Selects the volume to expand.
detail volume    Displays details for the selected volume, including the unallocated capacity.
extend           Extends the volume to the available unallocated space.

Example 5-40 Using Diskpart

C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type       Size   Status   Info
  ----------  ---  -----------  -----  ---------  -----  -------  ------
  Volume 0     C                NTFS   Partition  75 GB  Healthy  System
  Volume 1     S   SVC_Senegal  NTFS   Partition  10 GB  Healthy
  Volume 2     D                       DVD-ROM     0 B   Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB  1020 MB

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB  0 B

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

After extending the volume, the detail volume command shows that there is no longer any free capacity on the volume. The list volume command shows the file system size. The Disk Management window also shows the new disk size, as shown in Figure 5-17.

Figure 5-17 Disk Management after extending disk

The example here uses a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC VDisk. The new space appears as unallocated space at the end of the disk.


In this case, you do not need to use the DiskPart tool; you only need the Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade your Basic Disk to a Dynamic Disk, or vice versa, without backing up your data, because this operation is disruptive to the data due to the different position of the LBA on the disks.

5.8 Example configuration of attaching an SVC to a Windows 2008 host


This section describes an example configuration that shows the attachment of a Windows 2008 host system to the SVC. More details about Windows 2008 and SVC are covered in 5.6, Windows-specific information on page 183.

5.8.1 Installing SDDDSM on a Windows 2008 host


Download the HBA driver and the SDDDSM Package and copy it to your host system. Information about the recommended SDDDSM Package is listed in 5.6.7, SDDDSM driver installation on Windows on page 188. HBA driver details are listed in 5.6.3, Hardware lists, device driver, HBAs and firmware levels on page 183. We will perform the steps described in 5.6.2, Configuring Windows on page 183 to achieve this task. As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware installation, SAN configuration is done, and hotfixes are applied. The Disk timeout value is set to 60 seconds (see 5.6.5, Changing the disk timeout on Microsoft Windows Server on page 186) and we will start with the driver installation.

Installing the HBA driver


1. Extract the Qlogic driver package to your hard drive. 2. Select Start Run. 3. Enter devmgmt.msc, click OK, and the Device Manager will appear. 4. Expand the Storage Controllers.


5. Right-click the HBA and select Update driver Software. (Figure 5-18).

Figure 5-18 Windows 2008 driver update

6. Click Browse my computer for driver software (Figure 5-19).

Figure 5-19 Windows 2008 driver update

7. Enter the path to the extracted QLogic driver and click Next (Figure 5-20 on page 201).


Figure 5-20 Windows 2008 driver update

8. Windows installs the driver (Figure 5-21).

Figure 5-21 Windows 2008 driver installation


9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).

Figure 5-22 Windows 2008 driver installation

10.Repeat steps 1 to 8 for all HBAs installed in the system.

5.8.2 Installing SDDDSM


To install the SDDDSM driver on your system, perform the following steps: 1. Extract the SDDDSM driver package to a folder on your hard drive. 2. Open the folder with the extracted files. 3. Run setup.exe and a DOS command prompt will appear. 4. Type Y and press Enter to install SDDDSM (Figure 5-23).

Figure 5-23 Installing SDDDSM

5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system. After the reboot, the SDDDSM installation is complete. You can check this in Device Manager, as the SDDDSM device will appear (Figure 5-24 on page 203), and the SDDDSM tools will have been installed (Figure 5-25 on page 203).


Figure 5-24 SDDDSM installation

Figure 5-25 SDDDSM installation


5.8.3 Attaching SVC VDisks to Windows 2008


Create the VDisks on the SVC and map them to the Windows 2008 host. In this example, we have mapped three SVC disks to the Windows 2008 host named Diomede, as shown in Example 5-41.
Example 5-41 SVC host mapping to host Diomede

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id  name     SCSI_id  vdisk_id  vdisk_name    wwpn              vdisk_UID
0   Diomede  0        20        Diomede_0001  210000E08B0541BC  6005076801A180E9080000000000002B
0   Diomede  1        21        Diomede_0002  210000E08B0541BC  6005076801A180E9080000000000002C
0   Diomede  2        22        Diomede_0003  210000E08B0541BC  6005076801A180E9080000000000002D

Perform the following steps to use the devices on your Windows 2008 host: 1. Click Start and Run. 2. Enter diskmgmt.msc and click OK and the Disk Management window will appear. 3. Select Action and click Rescan Disks (Figure 5-26).

Figure 5-26 Windows 2008 - Rescan disks

4. The SVC disks will now appear in the Disk Management window (Figure 5-27 on page 205).


Figure 5-27 Windows 2008 Disk Management window

After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-28).

Figure 5-28 Windows 2008 Device Manager


5. To check that the disks are available, select Start All Programs Subsystem Device Driver DSM and click Subsystem Device Driver DSM. (Figure 5-29). The SDDDSM Command Line Utility will appear.

Figure 5-29 Windows 2008 Subsystem Device Driver DSM utility

6. Enter datapath query device and press Enter (Example 5-42). This command will display all disks and the available paths, including their state.
Example 5-42 Windows 2008 SDDDSM command-line utility

Microsoft Windows [Version 6.0.6001] Copyright (c) 2006 Microsoft Corporation.

All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk              State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL       0       0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL    1429       0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL    1456       0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL       0       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk              State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL    1520       0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL    1517       0

DEV#:   2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk              State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL      27       0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL    1396       0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL    1459       0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

Note: When following the SAN zoning recommendation, this gives us, using one VDisk and a host with two HBAs, (# of VDisks) x (# of paths per I/O group per HBA) x (# of HBAs) = 1 x 2 x 2 = four paths.

7. Right-click the disk in Disk Management and select Online to place the disk online (Figure 5-30).

Figure 5-30 Windows 2008 - place disk online

8. Repeat step 7 for all of your attached SVC disks. 9. Right-click one disk again and select Initialize Disk (Figure 5-31).

Figure 5-31 Windows 2008 - Initialize Disk


10.Mark all the disks you want to initialize and click OK (Figure 5-32).

Figure 5-32 Windows 2008 - Initialize Disk

11.Right-click the unallocated disk space and select New Simple Volume (Figure 5-33).

Figure 5-33 Windows 2008 - New Simple Volume

12.The New Simple Volume Wizard appears. Click Next. 13.Enter a disk size and click Next (Figure 5-34).

Figure 5-34 Windows 2008 - New Simple Volume

14.Assign a drive letter and click Next (Figure 5-35 on page 209).

Figure 5-35 Windows 2008 - New Simple Volume

15.Enter a volume label and click Next (Figure 5-36).

Figure 5-36 Windows 2008 - New Simple Volume


16.Click Finish and repeat this step for every SVC disk on your host system (Figure 5-37).

Figure 5-37 Windows 2008 - Disk Management

5.8.4 Extending a Windows 2008 Volume


Using SVC and Windows 2008 gives you the ability to extend volumes while they are in use. The steps to extend a volume are described in 5.7.1, Extending a Windows 2000 or 2003 volume on page 194. Windows 2008 also uses the DiskPart utility to extend volumes. To start it, select Start Run and enter DiskPart; the DiskPart utility will appear. The procedure is exactly the same as in Windows 2003, so follow the Windows 2003 description to extend your volume.
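If you extend volumes regularly, the same sequence can also be run non-interactively, because DiskPart accepts a script file through its /s option. The following sketch assumes that the volume that received the new capacity is volume 1 and that the script is saved as extend_vol.txt; both values are examples only and must be adapted to your configuration:

rem extend_vol.txt - select the volume that received the new capacity and extend it
select volume 1
extend
exit

C:\>diskpart /s extend_vol.txt

Because DiskPart can only extend a basic volume into unallocated space that directly follows it, verify with the list volume and detail volume commands that the new capacity is visible before running the script.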

5.8.5 Removing a disk on Windows


When we want to remove an SVC VDisk from Windows, we follow the standard Windows procedure to make sure that there is no data on the disk that we want to preserve, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, we remove the VDisk mapping on the SVC. We must be sure that we are removing the correct VDisk. To check this, we use SDD to find the serial number of the disk, and on the SVC we use lshostvdiskmap to find the VDisk name and number; we then verify that the SDD serial number on the host matches the UID of the VDisk on the SVC. When the VDisk mapping is removed, we perform a disk rescan, Disk Management on the server removes the disk, and the vpath goes into the CLOSE state on the server. We can check this by using the SDD command datapath query device, but the closed vpath is only removed after a reboot of the server. The following sequence of examples shows how to remove an SVC VDisk from a Windows server. We show it on a Windows 2003 operating system, but the steps also apply to Windows 2000 and 2008.


Figure 5-15 on page 195 shows the Disk Manager before removing the disk. We will remove Disk 1(S:). To find the correct VDisk information, we find the Serial/UID number using SDD (Example 5-43).
Example 5-43 Removing SVC disk from Windows server

C:\Program Files\IBM\SDDDSM>datapath query device Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E9080000000000000F ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0 1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000010 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000011 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0


Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping to remove using the lshostvdiskmap command on the SVC, and after this we remove the actual VDisk mapping (Example 5-44).
Example 5-44 Finding and removing the VDisk mapping

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id  name     SCSI_id  vdisk_id  vdisk_name       wwpn              vdisk_UID
1   Senegal  0        7         Senegal_bas0001  210000E08B89B9C0  6005076801A180E9080000000000000F
1   Senegal  1        8         Senegal_bas0002  210000E08B89B9C0  6005076801A180E90800000000000010
1   Senegal  2        9         Senegal_bas0003  210000E08B89B9C0  6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id  name     SCSI_id  vdisk_id  vdisk_name       wwpn              vdisk_UID
1   Senegal  1        8         Senegal_bas0002  210000E08B89B9C0  6005076801A180E90800000000000010
1   Senegal  2        9         Senegal_bas0003  210000E08B89B9C0  6005076801A180E90800000000000011

Here we can see that the VDisk is removed from the server. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-38.

Figure 5-38 Disk Management - Disk has been removed

SDD also shows us that the status for all paths to Disk1 has changed to CLOSE because the disk is not available (Example 5-45 on page 213).


Example 5-45 SDD - closed path

C:\Program Files\IBM\SDDDSM>datapath query device Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E9080000000000000F ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0 1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0 2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0 3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000010 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 6005076801A180E90800000000000011 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0 The disk (Disk1) is now removed from the server. However, to remove the SDD information of the disk, we need to reboot the server, but this can wait until a more suitable time.

5.9 Using the SVC CLI from a Windows host


To issue CLI commands, we must install and prepare the SSH client system on the Windows host system. We can install the PuTTY SSH client software on a Windows host using the PuTTY Installation program. This is in the SSHClient\PuTTY directory of the SAN Volume Controller Console CD-ROM, or you can download PuTTY from the following Web site: http://www.chiark.greenend.org.uk/~sgtatham/putty/ The following Web site offers SSH client alternatives for Windows: http://www.openssh.com/windows.html Cygwin software has an option to install an OpenSSH client. You can download Cygwin from the following Web site: http://www.cygwin.com/
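Once the PuTTY SSH key pair has been generated and the public key has been uploaded to the SVC cluster, individual CLI commands can also be run directly from a Windows command prompt by using the plink.exe utility that is shipped with PuTTY. The following sketch assumes that the private key is stored in C:\keys\svc.ppk and that the cluster IP address is 9.43.86.120; both values are examples only:

C:\>plink -i C:\keys\svc.ppk admin@9.43.86.120 svcinfo lscluster -delim :

The first connection prompts you to accept and cache the host key of the cluster.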


More information about the CLI is covered in Chapter 7, SVC operations using the CLI on page 337.

5.10 Microsoft Volume Shadow Copy


The SAN Volume Controller provides support for the Microsoft Volume Shadow Copy Service. The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and files are in use. In this section, we discuss how to install the Microsoft Volume Copy Shadow Service. The following operating system versions are supported: Windows 2003 Standard Server Edition, 32-bit and 64-bit (x64) versions Windows 2003 Enterprise Edition, 32-bit and 64-bit (x64) versions Windows 2003 Standard Server R2 Edition, 32-bit and 64-bit (x64) versions Windows 2003 Enterprise R2 Edition, 32-bit and 64-bit (x64) versions Windows Server 2008 Standard Windows Server 2008 Enterprise The following components are used to provide support for the service: SAN Volume Controller SAN Volume Controller master console IBM System Storage hardware provider, known as the IBM System Storage Support for Microsoft Volume Shadow Copy Service Microsoft Volume Shadow Copy Service The IBM System Storage provider is installed on the Windows host. To provide the point-in-time shadow copy, the components complete the following process: 1. A backup application on the Windows host initiates a snapshot backup. 2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider that a copy is needed. 3. The SAN Volume Controller prepares the volume for a snapshot. 4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy. 5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service. 6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful. The Volume Shadow Copy Service maintains a free pool of virtual disks (VDisks) for use as a FlashCopy target and a reserved pool of VDisks. These pools are implemented as virtual host systems on the SAN Volume Controller.


5.10.1 Installation overview


The steps for implementing the IBM System Storage Support for Microsoft Volume Shadow Copy Service must be completed in the correct sequence. Before you begin, you must have experience with, or knowledge of, administering a Windows operating system. And you must also have experience with, or knowledge of, administering a SAN Volume Controller. You will need to complete the following tasks: Verify that the system requirements are met. Install the SAN Volume Controller Console if it is not already installed. Install the IBM System Storage hardware provider. Verify the installation. Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.

5.10.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system: SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage Hardware provider. IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.

5.10.3 Installing the IBM System Storage hardware provider


This section includes the steps to install the IBM System Storage hardware provider on a Windows server. You must satisfy all of the system requirements before starting the installation. During the installation, you will be prompted to enter information about the SAN Volume Controller master console, including the location of the truststore file. The truststore file is generated during the installation of the master console. You must copy this file to a location that is accessible to the IBM System Storage hardware provider on the Windows server. When the installation is complete, the installation program might prompt you to restart the system. Complete the following steps to install the IBM System Storage hardware provider on the Windows server: 1. Download the installation program files from the IBM Web site, and place a copy on the Windows server where you will install the IBM System Storage hardware provider: http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH &dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en 2. Log on to the Windows server as an administrator and navigate to the directory where the installation program is located. 3. Run the installation program by double-clicking IBMVSS.exe.


4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the installation. You can click Cancel at any time to exit the installation. To move back to previous windows while using the wizard, click Back.

Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation

5. The License Agreement window opens (Figure 5-40). Read the license agreement information, select whether you accept the terms of the license agreement, and click Next. If you do not accept the terms, you cannot continue with the installation.

Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation


6. The Choose Destination Location window opens (Figure 5-41). Click Next to accept the default directory where the setup program will install the files, or click Change to select a different directory. Click Next.

Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation

7. Click Install to begin the installation (Figure 5-42):

Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation


8. From the next window, select the required CIM server, or select Enter the CIM Server address manually, and click Next (Figure 5-43).

Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation

9. The Enter CIM Server Details window appears. Enter the following information in the fields (Figure 5-44): a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed. b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed. c. In the CIM Password field, type the password for the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the SAN Volume Controller Console d. Click Next.

Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation

10.In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-45 on page 219).


Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation

Note: If these settings change after installation, you can use the ibmvcfg.exe tool to update Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM agent server, port, or user information, contact your CIM agent administrator.

5.10.4 Verifying the installation


Perform the following steps to verify the installation. 1. Select Start All Programs Administrative Tools Services from the Windows server task bar. 2. Ensure that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software appears and that Status is set to Started and Startup Type is set to Automatic. 3. Open a command prompt window and issue the following command: vssadmin list providers


This command ensures that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider (Example 5-46).
Example 5-46 Microsoft Software Shadow copy provider

C:\Documents and Settings\Administrator>vssadmin list providers vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001 Microsoft Corp. Provider name: 'Microsoft Software Shadow Copy provider 1.0' Provider type: System Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5} Version: 1.0.0.7 Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider' Provider type: Hardware Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b} Version: 3.1.0.1108 If you are able to successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.

5.10.5 Creating the free and reserved pools of volumes


The IBM System Storage hardware provider maintains a free and a reserved pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free and reserved pool of volumes are implemented as virtual host systems. You must define these two virtual host systems on the SAN Volume Controller. When a shadow copy is created, the IBM System Storage hardware provider selects a volume in the free pool, assigns it to the reserved pool, and then removes it from the free pool. This protects the volume from being overwritten by other Volume Shadow Copy Service users. To successfully perform a Volume Shadow Copy Service operation, there must be enough virtual disks (VDisks) mapped to the free pool. The VDisks must be the same size as the source VDisks. Use the SAN Volume Controller Console or the SAN Volume Controller command-line interface (CLI) to perform the following steps: 1. Create a host for the free pool of VDisks. You can use the default name VSS_FREE or specify a different name. Associate the host with the worldwide port name (WWPN) 5000000000000000 (15 zeroes) (Example 5-47).
Example 5-47 mkhost for free pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force Host, id [2], successfully created 2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify a different name. Associate the host with the WWPN 5000000000000001 (14 zeroes) (Example 5-48 on page 221).


Example 5-48 mkhost for reserved pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force Host, id [3], successfully created 3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped to any other hosts. If you already have VDisks created for the free pool of volumes, you must assign the VDisks to the free pool. 4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg add command to add VDisks to the free pool (Example 5-49).
Example 5-49 Host mappings

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001 Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002 Virtual Disk to Host map, id [1], successfully created 5. Verify that the VDisks have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs (Example 5-50).
Example 5-50 Verify hosts

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 2 VSS_FREE 0 10 msvc0001 5000000000000000 6005076801A180E90800000000000012 2 VSS_FREE 1 11 msvc0002 5000000000000000 6005076801A180E90800000000000013

5.10.6 Changing the configuration parameters


You can change the parameters that you defined when you installed the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do so, use the ibmvcfg.exe tool, a command-line utility located in C:\Program Files\IBM\Hardware Provider for VSS-VDS (Example 5-51).
Example 5-51 ibmvcfg.util help

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
    /h | /help | -? | /?
    showcfg
    listvols <all|free|unassigned>
    add <volume serial number list> (separated by spaces)
    rem <volume serial number list> (separated by spaces)
Configuration:
    set user <CIMOM user name>
    set password <CIMOM password>
    set trace [0-7]
    set trustpassword <trustpassword>
    set truststore <truststore location>
    set usingSSL <YES | NO>
    set vssFreeInitiator <WWPN>
    set vssReservedInitiator <WWPN>
    set FlashCopyVer <1 | 2> (only applies to ESS)
    set cimomPort <PORTNUM>
    set cimomHost <Hostname>
    set namespace <Namespace>
    set targetSVC <svc_cluster_ip>
    set backgroundCopy <0-100>

The available commands are shown in Table 5-4.


Table 5-4 ibmvcfg.exe commands

ibmvcfg showcfg
    Description: Lists the current settings.
    Example: ibmvcfg showcfg

ibmvcfg set username <username>
    Description: Sets the user name to access the SAN Volume Controller Console.
    Example: ibmvcfg set username Dan

ibmvcfg set password <password>
    Description: Sets the password of the user name that will access the SAN Volume Controller Console.
    Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
    Description: Specifies the IP address of the SAN Volume Controller on which the VDisks are located when VDisks are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
    Example: ibmvcfg set targetSVC 9.43.86.120

ibmvcfg set backgroundCopy <0-100>
    Description: Sets the background copy rate for FlashCopy.
    Example: ibmvcfg set backgroundCopy 80

ibmvcfg set usingSSL <yes|no>
    Description: Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
    Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
    Description: Specifies the SAN Volume Controller Console port number. The default value is 5999.
    Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
    Description: Sets the name of the server where the SAN Volume Controller Console is installed.
    Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
    Description: Specifies the namespace value that the master console is using. The default value is \root\ibm.
    Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
    Description: Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000.
    Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
    Description: Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001.
    Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
    Description: Lists all virtual disks (VDisks), including information about size, location, and VDisk-to-host mappings.
    Example: ibmvcfg listvols

ibmvcfg listvols all
    Description: Lists all VDisks, including information about size, location, and VDisk-to-host mappings.
    Example: ibmvcfg listvols all

ibmvcfg listvols free
    Description: Lists the volumes that are currently in the free pool.
    Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
    Description: Lists the volumes that are currently not mapped to any hosts.
    Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
    Description: Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
    Examples: ibmvcfg add vdisk12
              ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
    Description: Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
    Examples: ibmvcfg rem vdisk12
              ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141


5.11 Linux (on Intel) specific information


The following sections details specific information pertaining to the connection of Linux on Intel-based hosts to the SVC environment.

5.11.1 Configuring the Linux host


Follow these steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.6.4, Host adapter installation and configuration on page 184.
3. Install the supported HBA driver / firmware and upgrade the kernel if required, as described in 5.11.2, Configuration information on page 224.
4. Connect the Linux server FC Host adapters to the switches.
5. Configure the switches (zoning) if needed.
6. Install SDD for Linux, as described in 5.11.5, Multipathing in Linux on page 225.
7. Configure the host, VDisks, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the VDisks created on the SVC.

5.11.2 Configuration information


The SAN Volume Controller supports hosts that run the following Linux distributions: Red Hat Enterprise Linux and SUSE Linux Enterprise Server. For the latest information, always refer to this site: http://www.ibm.com/storage/support/2145 At the time of writing, the following support information was available: Software supported levels: http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278 Hardware supported levels: http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277 There you will find the hardware list of supported host bus adapters and the device driver levels for Linux. Check the supported firmware and driver level for your host bus adapter and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.

5.11.3 Disabling automatic Linux system updates


Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date, while Novell SUSE provides the YaST Online Update utility. These features periodically query for updates available for each host and can be configured to automatically install any new updates that they find.


Often, the automatic update process also upgrades the system to the latest kernel level. If your hosts run SDD, consider turning off the automatic update of kernel levels. Some drivers supplied by IBM, such as SDD, are dependent on a specific kernel and will cease to function on a new kernel. Similarly, host bus adapter (HBA) drivers need to be compiled against specific kernels in order to function optimally. By allowing automatic updates of the kernel, you risk impacting your host systems unexpectedly.
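How you turn off automatic kernel updates depends on the distribution and the update tool in use; the following is only an illustration. On a Red Hat system that uses yum for updates, kernel packages can be excluded by adding an exclude line to the [main] section of /etc/yum.conf (check your distribution's documentation for the equivalent setting in up2date or the YaST Online Update):

[main]
exclude=kernel*

With this setting in place, the kernel is only updated when you explicitly install a new kernel package yourself.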

5.11.4 Setting queue depth with QLogic HBAs


The queue depth is the number of I/O operations that can be run in parallel on a device. Configure your host running the Linux operating system using the formula specified in 5.16, Calculating the queue depth on page 251. Perform the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file:
   a. For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):
      options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
   b. For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):
      options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands:
   a. If you are running on an SUSE Linux Enterprise Server operating system, run the mk_initrd command.
   b. If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command and then restart.
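As an illustration only, assume that the formula gives a queue depth of 32 for a Red Hat Enterprise Linux host running a 2.6 kernel with QLogic HBAs. The value 32 and the file names below are examples and must be replaced with the result of your own calculation and your distribution's conventions:

options qla2xxx ql2xfailover=0 ql2xmaxqdepth=32

# Rebuild the RAM disk for the running kernel on Red Hat Enterprise Linux:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)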

5.11.5 Multipathing in Linux


Red Hat Enterprise Linux 5 and later, and SUSE Linux Enterprise Server 10 and later, provide their own multipath support as part of the operating system. On older systems, it is necessary to install the IBM SDD multipath driver.

Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.11.2, Configuration information on page 224.


The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server and we configured the zoning in order to access our VDisk from four paths.
Example 5-52 cat /proc/scsi/scsi command example

[root@diomede sdd]# cat /proc/scsi/scsi Attached devices: Host: scsi4 Channel: 00 Id: 00 Lun: 00 Vendor: IBM Model: 2145 Type: Unknown Host: scsi5 Channel: 00 Id: 00 Lun: 00 Vendor: IBM Model: 2145 Type: Unknown [root@diomede sdd]#

Rev: 0000 ANSI SCSI revision: 04 Rev: 0000 ANSI SCSI revision: 04

The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-53.
Example 5-53 rpm command example

[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm Preparing... ########################################### [100%] 1:IBMsdd ########################################### [100%] Added following line to /etc/inittab: srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1 [root@Palau sdd]# To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you should see an OK success message, as shown in Example 5-54.
Example 5-54 Loading and configuring SDD

[root@Palau sdd]# sdd start Starting IBMsdd driver load: [ Issuing killall sddsrv to trigger respawn... Starting IBMsdd configuration: [ OK OK ] ]

Issue the cfgvpath query command to view the name and serial number of the VDisk configured in the SAN Volume Controller, as shown in Example 5-55.
Example 5-55 cfgvpath query example

[root@Palau ~]# cfgvpath query RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30 RTPG succeeded: sd_name=/dev/sda df_ctlr=0 /dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0 RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30


RTPG succeeded: sd_name=/dev/sdb df_ctlr=0 /dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0 RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30 RTPG succeeded: sd_name=/dev/sdc df_ctlr=0 /dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0 RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30 RTPG succeeded: sd_name=/dev/sdd df_ctlr=0 /dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0 [root@Palau ~]# The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.
Example 5-56 cfgvpath command example

[root@Palau ~]# cfgvpath c--------- 1 root root 253, 0 Jun 5 WARNING: vpatha path sda has WARNING: vpatha path sdb has WARNING: vpatha path sdc has WARNING: vpatha path sdd has Writing out new configuration to file [root@Palau ~]#

09:04 /dev/IBMsdd already been configured. already been configured. already been configured. already been configured. /etc/vpath.conf

The configuration information is saved by default in the file /etc/vpath.conf. You can save the configuration information to a specified file name by entering the following command: cfgvpath -f file_name.cfg Issue the chkconfig command to enable SDD to run at system startup: chkconfig sdd on To verify the setting, enter the following command: chkconfig --list sdd This is shown in Example 5-57.
Example 5-57 sdd run level example

[root@Palau sdd]# chkconfig --list sdd sdd 0:off 1:off 2:on [root@Palau sdd]#

3:on

4:on

5:on

6:off

If necessary, you can disable the startup option by entering: chkconfig sdd off


Run the datapath query commands to display the online adapters and paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and 2. Paths 1 and 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-58.
Example 5-58 datapath query command example

[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt#  Name           State   Mode    Select  Errors  Paths  Active
    0  Host0Channel0  NORMAL  ACTIVE       1       0      2       0
    1  Host1Channel0  NORMAL  ACTIVE       0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#:   0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk     State   Mode    Select  Errors
    0    Host0Channel0/sda     CLOSE   NORMAL       1       0
    1    Host0Channel0/sdb     CLOSE   NORMAL       0       0
    2    Host1Channel0/sdc     CLOSE   NORMAL       0       0
    3    Host1Channel0/sdd     CLOSE   NORMAL       0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:
Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then an alternate path is chosen for subsequent I/O operations.
Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.
Round robin (rr): The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two.
You can dynamically change the SDD path-selection policy by using the SDD command datapath set device policy, and you can see the policy that is active on a device with the datapath query device command. Example 5-58 shows that the active policy is Optimized Sequential.
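For example, to switch the vpath device shown above to the round robin policy and then verify the change, you can run the following commands (the device number 0 matches this example and must be adjusted to your configuration):

datapath set device 0 policy rr
datapath query device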


Example 5-59 shows the VDisk information from the SVC command-line interface.
Example 5-59 svcinfo redhat1

IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id  name    SCSI_id  vdisk_id  vdisk_name  wwpn              vdisk_UID
6   linux2  0        33        linux_vd1   210000E08B89C1CD  60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>


5.11.6 Creating and preparing SDD volumes for use


Follow these steps to create and prepare the volumes: 1. Create a partition on the vpath device, as shown in Example 5-60.
Example 5-60 fdisk example

[root@Palau ~]# fdisk /dev/vpatha Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable. Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite) Command (m for help): m Command action a toggle a bootable flag b edit bsd disklabel c toggle the dos compatibility flag d delete a partition l list known partition types m print this menu n add a new partition o create a new empty DOS partition table p print the partition table q quit without saving changes s create a new empty Sun disklabel t change a partition's system id u change display/entry units v verify the partition table w write table to disk and exit x extra functionality (experts only) Command (m for help): n Command action e extended p primary partition (1-4) e Partition number (1-4): 1 First cylinder (1-1011, default 1): Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011): Using default value 1011 Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks. [root@Palau ~]#


2. Create a file system on the vpath, as shown in Example 5-61.


Example 5-61 mkfs command example

[root@Palau ~]# mkfs -t ext3 /dev/vpatha mke2fs 1.35 (28-Feb-2004) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 131072 inodes, 262144 blocks 13107 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=268435456 8 block groups 32768 blocks per group, 32768 fragments per group 16384 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376 Writing inode tables: done Creating journal (8192 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 27 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. [root@Palau ~]# 3. Create the mount point and mount the vpath drive, as shown in Example 5-62.
Example 5-62 Mount point

[root@Palau ~]# mkdir /itsosvc [root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc 4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc and the datapath query command shows that four paths are available (Example 5-63).
Example 5-63 Display mounted drives

[root@Palau ~]# df
Filesystem            1K-blocks      Used  Available  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00
                       74699952   2564388   68341032    4%  /
/dev/hda1                101086     13472      82395   15%  /boot
none                    1033136         0    1033136    0%  /dev/shm
/dev/vpatha             1032088     34092     945568    4%  /itsosvc
[root@Palau ~]#

[root@Palau ~]# datapath query device
Total Devices : 1

DEV#:   0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk     State   Mode    Select  Errors
    0    Host0Channel0/sda     OPEN    NORMAL       1       0
    1    Host0Channel0/sdb     OPEN    NORMAL    6296       0
    2    Host1Channel0/sdc     OPEN    NORMAL    6178       0
    3    Host1Channel0/sdd     OPEN    NORMAL       0       0
[root@Palau ~]#

5.11.7 Using the operating system MPIO


As mentioned before, Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support as part of the operating system, which means that you do not have to install an additional device driver. Always check whether your operating system includes one of the supported multipath drivers; you will find this information in the links provided in 5.11.2, Configuration information on page 224. In SLES10, the multipath drivers and tools are installed by default, but for RHEL5, the user has to explicitly choose the multipath components during the OS installation to install them. Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev directory. Hosts that use 2.6 kernel Linux operating systems can have as many Fibre Channel disks as are allowed by the SAN Volume Controller. The following Web site provides the most current information about the maximum configuration for the SAN Volume Controller: http://www.ibm.com/storage/support/2145
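When additional VDisks are mapped to a host that is already running, the new LUNs must be discovered before their device files appear in /dev. A minimal sketch for a 2.6 kernel host, assuming that the two Fibre Channel HBAs are registered as host0 and host1 (the host numbers depend on your system; check /sys/class/scsi_host), is:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan

Alternatively, a reboot of the host also discovers the new LUNs.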

5.11.8 Creating and preparing MPIO volumes for use


First, you have to start the MPIO daemon on your system. To do this, run the following commands on your host system.

Enable MPIO for SLES10 by running the following commands:
1. /etc/init.d/boot.multipath {start|stop}
2. /etc/init.d/multipathd {start|stop|status|try-restart|restart|force-reload|reload|probe}

Note: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during boot up.

Enable MPIO for RHEL5 by running the following commands:
1. modprobe dm-multipath
2. modprobe dm-round-robin
3. service multipathd start
4. chkconfig multipathd on

Example 5-64 on page 233 shows the commands issued on a RHEL 5.1 operating system.


Example 5-64 Starting MPIO daemon on RHEL

[root@palau [root@palau [root@palau [root@palau

~]# modprobe dm-round-robin ~]# multipathd start ~]# chkconfig multipathd on ~]#

5. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.
Example 5-65 Editing the multipath.conf file

[root@palau etc]# vi multipath.conf 6. Add the following entry to the multipath.conf file: device { vendor "IBM" product "2145" path_grouping_policy group_by_prio prio_callout "/sbin/mpath_prio_alua /dev/%n" } 7. Restart the multipath daemon (Example 5-66).
Example 5-66 Stopping and starting the multipath daemon

[root@palau ~]# service multipathd stop Stopping multipathd daemon: [root@palau ~]# service multipathd start Starting multipathd daemon:

[ [

OK OK

] ]

8. Type the command multipath -dl to see the mpio configuration. You should see two groups with two paths each. All paths should have the state [active][ready] and one group should be [enabled].


9. Use fdisk to create a partition on the SVC disk, as shown in Example 5-67.
Example 5-67 fdisk

[root@palau scsi]# fdisk -l Disk /dev/hda: 80.0 GB, 80032038912 bytes 255 heads, 63 sectors/track, 9730 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot /dev/hda1 * /dev/hda2 Start 1 14 End 13 9730 Blocks 104391 78051802+ Id 83 8e System Linux Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sda doesn't contain a valid partition table Disk /dev/sdb: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdb doesn't contain a valid partition table Disk /dev/sdc: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdc doesn't contain a valid partition table Disk /dev/sdd: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdd doesn't contain a valid partition table Disk /dev/sde: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sde doesn't contain a valid partition table Disk /dev/sdf: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdf doesn't contain a valid partition table Disk /dev/sdg: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdg doesn't contain a valid partition table


Disk /dev/sdh: 4244 MB, 4244635648 bytes 131 heads, 62 sectors/track, 1020 cylinders Units = cylinders of 8122 * 512 = 4158464 bytes Disk /dev/sdh doesn't contain a valid partition table Disk /dev/dm-2: 4244 MB, 4244635648 bytes 255 heads, 63 sectors/track, 516 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk /dev/dm-2 doesn't contain a valid partition table Disk /dev/dm-3: 4244 MB, 4244635648 bytes 255 heads, 63 sectors/track, 516 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk /dev/dm-3 doesn't contain a valid partition table [root@palau scsi]# fdisk /dev/dm-2 Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable. Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite) Command (m for help): n Command action e extended p primary partition (1-4) e Partition number (1-4): 1 First cylinder (1-516, default 1): Using default value 1 Last cylinder or +size or +sizeM or +sizeK (1-516, default 516): Using default value 516 Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. WARNING: Re-reading the partition table failed with error 22: Invalid argument. The kernel still uses the old table. The new table will be used at the next reboot. [root@palau scsi]# shutdown -r now


10.Create a file system using the mkfs command (Example 5-68).


Example 5-68 mkfs command

[root@palau ~]# mkfs -t ext3 /dev/dm-2 mke2fs 1.39 (29-May-2006) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 518144 inodes, 1036288 blocks 51814 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=1061158912 32 block groups 32768 blocks per group, 32768 fragments per group 16192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736 Writing inode tables: done Creating journal (16384 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 29 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. [root@palau ~]# 11.Create a mount point and mount the drive, as shown in Example 5-69.
Example 5-69 Mount point

[root@palau ~]# mkdir /svcdisk_0 [root@palau ~]# cd /svcdisk_0/ [root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0 [root@palau svcdisk_0]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 73608360 1970000 67838912 3% / /dev/hda1 101086 15082 80785 16% /boot tmpfs 967984 0 967984 0% /dev/shm /dev/dm-2 4080064 73696 3799112 2% /svcdisk_0


5.12 VMware configuration information


This section explains the requirements and additional information for attaching the SAN Volume Controller to a variety of guest host operating systems running on the VMware operating system.

5.12.1 Configuring VMware hosts


To configure the VMware hosts, follow these steps:
1. Install the HBAs into your host system, as described in 5.12.4, HBAs for hosts running VMware on page 237.
2. Connect the server FC Host adapters to the switches.
3. Configure the switches (zoning), as described in 5.12.6, VMware storage and zoning recommendations on page 239.
4. Install the VMware operating system (if not already done) and check the HBA timeouts, as described in 5.12.7, Setting the HBA timeout for failover in VMware on page 240.
5. Configure the host, VDisks, and host mapping in the SVC, as described in 5.12.9, Attaching VMware to VDisks on page 241.

5.12.2 Operating system versions and maintenance levels


For the latest information about VMware support, refer to:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
At the time of writing, the following versions are supported:
- ESX V3.5
- ESX V3.51
- ESX V3.02
- ESX V2.5.3
- ESX V2.5.2
- ESX V2.1 with VMFS disks
Note: Customers who are running the VMware V3.01 build are required to move to a minimum VMware level of V3.02 for continued support.

5.12.3 Guest operating systems


Also make sure that you are using supported guest operating systems. The latest information is available at: http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_VMWare

5.12.4 HBAs for hosts running VMware


Ensure that your hosts running on VMware operating systems use the correct host bus adapters (HBAs) and firmware levels. Install the host adapters into your system. Refer to the manufacturer's instructions for installation and configuration of the HBAs.

In IBM System x servers, the HBAs should always be installed in the first slots. For example, if you install two HBAs and two network cards, install the HBAs in slot 1 and slot 2 and the network cards in the remaining slots.

For older ESX versions, you can find the supported HBAs at the IBM Web site:
http://www.ibm.com/storage/support/2145

The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware Web site (clicking a link opens or downloads the PDF):
- V3.02: http://www.vmware.com/pdf/vi3_io_guide.pdf
- V3.5: http://www.vmware.com/pdf/vi35_io_guide.pdf

The supported HBA device drivers are already included in the ESX server build. After installing, load the default configuration of your FC HBAs. We recommend using the same model of HBA with the same firmware in one server. It is not supported to have Emulex and QLogic HBAs that access the same target in one server.

5.12.5 Multipath solutions supported


Only a single path is supported in ESX V2.1; multipathing is supported in ESX V2.5.x. The VMware operating system provides multipathing support, so installing multipathing software is not required.

VMware multipathing software dynamic pathing


VMware multipathing software does not support dynamic pathing. Preferred paths set in the SAN Volume Controller are ignored. The VMware multipathing software performs static load balancing for I/O, based upon a host setting that defines the preferred path for a given volume.

Multipathing configuration maximums


When you configure multipathing, keep in mind the maximum configuration for the VMware multipathing software: a maximum of 256 SCSI devices is supported by the VMware software, and the maximum number of paths to each VDisk is four, giving a maximum of 1024 paths on a server.
Note: Each path to a VDisk equates to a single SCSI device.

Clustering support for hosts running VMware


The SVC provides cluster support on VMware guest operating systems. The following Web Site provides the current interoperability information: http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_VMware

SAN boot support


SAN boot of any guest OS is supported under VMware. The very nature of VMware means that this is a requirement on any guest OS. The guest OS itself must reside on a SAN disk.


If you are not familiar with VMware environments and the advantages of storing virtual machines and application data on a SAN, we recommend that you get an overview of the VMware products before continuing with this section. VMware documentation is available at: http://www.vmware.com/support/pubs/

5.12.6 VMware storage and zoning recommendations


The VMware ESX server is able to use a Virtual Machine File System (VMFS). This is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It is also able to handle concurrent access from multiple physical machines because it enforces the appropriate access controls. This means multiple ESX hosts can share the same set of LUNs (Figure 5-46).

Figure 5-46 VMware - SVC zoning example

This means that theoretically you are able to run all of your virtual machines on one LUN, but for performance reasons, in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage subsystems, or arrays. For example, if you run an ESX host with several virtual machines, it can make sense to use one slower array for guest operating systems without high I/O demands, such as print servers and Active Directory services, and another faster array for database guest operating systems.


Using fewer VDisks has the following advantages:
- More flexibility to create virtual machines without creating new space on the SVC
- More possibilities for taking VMware snapshots
- Fewer VDisks to manage

Using more and smaller VDisks can have the following advantages:
- Different I/O characteristics of the guest operating systems can be separated
- More flexibility (the multipathing policy and disk shares are set per VDisk)
- Microsoft Cluster Service requires its own VDisk for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at:
http://www.vmware.com/vmtn/resources/
or:
http://www.vmware.com/resources/techresources/1059

Note: ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone. You can have only one VMFS volume per VDisk.

5.12.7 Setting the HBA timeout for failover in VMware


The timeout for failover for ESX hosts should be set to 30 seconds. For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter; the timeout value is 2 * PortDownRetryCount + 5 seconds. It is recommended to set the qlport_down_retry parameter to 14. For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters should be set to 30 seconds. To make these changes on your system, perform the following steps (Example 5-70):
1. Back up the file /etc/vmware/esx.conf.
2. Open /etc/vmware/esx.conf for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters and edit the parameters described above.
5. Repeat this for every installed HBA.
Example 5-70 Setting HBA Timeout

[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf


5.12.8 Multipathing in ESX


The ESX Server performs multipathing itself, and you do not need to install a multipathing driver such as SDD, either on the ESX server or on the guest operating systems.

5.12.9 Attaching VMware to VDisks


First, we make sure that the VMware host is logged in to the SAN Volume Controller. In our examples, VMware ESX Server V3.5 and the host name Nile are used. Enter the following command to check the status of the host:
svcinfo lshost <hostname>
Example 5-71 shows that the host Nile is logged in to the SVC with two HBAs.
Example 5-71 lshost Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time (Figure 5-47 on page 242). But in many configurations, such as those for high availability, the virtual machines have to share the same VMFS file to share a disk.

Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
- None: Disks cannot be shared by other virtual machines.
- Virtual: Disks can be shared by virtual machines on the same server.
- Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.


Figure 5-47 Changing SCSI Bus settings

Create your VDisks on the SVC and map them to the ESX hosts.
Note: If you want to use features such as VMotion, the VDisks that contain the VMFS file have to be visible to every ESX host that should be able to host the virtual machine. In the SVC, this can be achieved by selecting the Allow the virtual disks to be mapped even if they are already mapped to a host check box. The VDisk has to have the same SCSI ID on each ESX host.
For this example configuration, we have created one VDisk and have mapped it to our ESX host, as shown in Example 5-72.
Example 5-72 Mapped VDisk to ESX host Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
1  Nile 0       12       VMW_pool   210000E08B892BCD 60050768018301BF2800000000000010
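The host mapping shown in Example 5-72 can be created from the SVC CLI. The following lines are a sketch only: the VDisk name VMW_pool and the host name Nile are taken from our example, while Nile2 is a hypothetical second ESX host that shares the datastore. The -scsi parameter assigns the same SCSI ID on each host, as required for VMotion, and -force allows a VDisk that is already mapped to another host to be mapped again; verify the options with the CLI help on your cluster.

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -force -host Nile2 -scsi 0 VMW_pool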

ESX does not automatically scan for SAN changes (except when rebooting the whole ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.


To configure a storage device for use in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned VDisks and open the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select Click here to create a datastore or Add storage if the yellow field does not appear (Figure 5-48).

Figure 5-48 VMWare Add Datastore

5. The Add Storage wizard appears.
6. Select Create Disk/LUN and click Next.
7. Select the SVC VDisk that you want to use for the datastore and click Next.
8. Review the disk layout and click Next.
9. Enter a datastore name and click Next.
10. Select a block size and enter the size of the new partition, then click Next.
11. Review your selections and click Finish.
Now the created VMFS datastore appears in the Storage window (Figure 5-49). You will see the details for the highlighted datastore. Check that all of the paths are available and that the Path Selection is set to Most Recently Used.

Figure 5-49 VMWare Storage Configuration

If not all paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.


The recommended multipath policy for SVC is Most Recently Used. If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50).
5. Select Most Recently Used.
6. Click OK.
7. Click Close.
Now your VMFS datastore has been created and you can start using it for your guest operating systems.

5.12.10 VDisk naming in VMware


In the Virtual Infrastructure Client, a VDisk is displayed as a sequence of three or four numbers, separated by colons (Figure 5-50):
<SCSI HBA>:<SCSI target>:<SCSI VDisk>:<disk partition>
Where:
SCSI HBA        The number of the SCSI HBA (may change).
SCSI target     The number of the SCSI target (may change).
SCSI VDisk      The number of the VDisk (never changes).
disk partition  The number of the disk partition (never changes).
If the last number is not displayed, the name stands for the entire VDisk. For example, a name of the form vmhba1:0:12:1 refers to partition 1 of VDisk 12, reached through SCSI target 0 on HBA 1, whereas vmhba1:0:12 stands for the entire VDisk.

Figure 5-50 VDisk naming in VMware


5.12.11 Setting the Microsoft guest operating system timeout


For a Microsoft Windows 2000 or 2003 Server installed as a VMware guest operating system, the disk timeout value should be set to 60 seconds. The instructions to perform this task are provided in 5.6.5, Changing the disk timeout on Microsoft Windows Server on page 186.

5.12.12 Extending a VMFS volume


It is possible to extend VMFS volumes while virtual machines are running. First, you have to extend the VDisk on the SVC, and then you are able to extend the VMFS volume. Before performing these steps, we recommend that you back up your data. Perform the following steps to extend a volume:
1. Expand the VDisk with the svctask expandvdisksize -size <size> -unit gb <VDisk_name> command (Example 5-73).
Example 5-73 Expanding a VDisk in SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name

fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>


2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10. Click Add Extent.
11. Select the new free space and click Next.
12. Click Next.
13. Click Finish.
The VMFS volume has now been extended and the new space is ready for use.

5.12.13 Removing a datastore from an ESX host


Before you remove a datastore from an ESX host, you have to migrate or delete all virtual machines that reside on this datastore. To remove the datastore, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the VDisk (as shown in Example 5-74).
10. In the VI Client, select Storage Adapters.
11. Click Rescan.
12. Make sure that the Scan for new Storage Devices check box is marked and click OK.
13. After the scan completes, the disk disappears from the view.
Your datastore has now been successfully removed from the system.
Example 5-74 Remove VDisk host mapping - Delete VDisk

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool


5.13 SUN Solaris support information


For the latest information about supported software and driver levels, always refer to this site: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.13.1 Operating system versions and maintenance levels


At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported in 64-bit only.

5.13.2 SDD dynamic pathing


Solaris supports dynamic pathing when you either add more paths to an existing VDisk, or if you present a new VDisk to a host. No user intervention is required. SDD is aware of the preferred paths that SVC sets per VDisk. SDD will use a round robin algorithm when failing over paths, that is, it will try the next known preferred path. If this fails and all preferred paths have been tried, it will use a round robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk will go offline. Therefore, it can take some time to perform path failover when multiple paths go offline. SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with DMP Dynamic Pathing


Veritas VM with DMP automatically selects the next available I/O path for I/O requests dynamically, without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). The new JNI drivers support the mapping of new VDisks without rebooting the Solaris host.
Note the following support characteristics:
- Veritas VM with DMP does not support preferred pathing with SVC.
- Veritas VM with DMP does support load balancing across multiple paths with SVC.

Co-existence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP will coexist in pass-thru mode with SDD. This means that DMP will use the vpath devices provided by SDD.

OS Cluster Support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/ 5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.

SAN Boot support


Note the following support characteristics:
- Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
- Boot from SAN is not supported when SDD is used as the multipathing software.


5.14 HP-UX configuration information


For the latest information about HP-UX support, refer to: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.14.1 Operating system versions and maintenance levels


At the time of writing, HP-UX V11.0 and HP-UX 11i v1/v2/v3 are supported (64-bit only).

5.14.2 Multipath solutions supported


At the time of writing, SDD V1.6.3.0 for HP-UX is supported. Multipathing Software PV Link and Cluster Software Service Guard V11.14 / 11.16 / 11.17 / 11.18 are also supported, but in a cluster environment SDD is recommended.

SDD dynamic pathing


HP-UX supports dynamic pathing when you either add more paths to an existing VDisk or if you present a new VDisk to a host. SDD is aware of the preferred paths that SVC sets per VDisk. SDD will use a round robin algorithm when failing over paths, that is, it will try the next known preferred path. If this fails and all preferred paths have been tried, it will use a round robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk will go offline. It can take some time, therefore, to perform path failover when multiple paths go offline. SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical Volume Links (PVLinks) Dynamic Pathing


Unlike SDD, PVLinks does not load balance and is unaware of the preferred paths that SVC sets per VDisk. Therefore, SDD is strongly recommended, except when in a clustering environment or when using an SVC VDisk as your boot disk. When creating a Volume Group, specify the primary path you want HP-UX to use when accessing the Physical Volume presented by SVC. This path, and only this path, will be used to access the PV as long as it is available, no matter what the SVC's preferred path to that VDisk is. Therefore, care needs to be taken when creating Volume Groups so that the primary links to the PVs (and load) are balanced over both HBAs, FC switches, SVC nodes, and so on. When extending a Volume Group to add alternate paths to the PVs, the order in which you add these paths is HP-UX's order of preference should the primary path become unavailable. Therefore, when extending a Volume Group, the first alternate path you add should be from the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link, or FC switch failure.

5.14.3 Co-existence of SDD and PV Links


If you want to multipath a VDisk with PVLinks while SDD is installed, you need to make sure SDD does not configure a vpath for that VDisk. To do this, you need to put the serial number of any VDisks you want SDD to ignore in /etc/vpathmanualexcl.cfg. In the case of SAN Boot, if you are booting from an SVC VDisk, when you install SDD (from Version 1.6 onwards), SDD will automatically ignore the boot VDisk.
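As an illustration of this exclusion mechanism, the following sketch shows one way to find the VDisk identifier on the SVC and add it to the exclusion file on the HP-UX host. The VDisk name HPUX_boot and the identifier value are hypothetical, and the exact entry format expected in /etc/vpathmanualexcl.cfg should be verified in the SDD documentation for your level.

# On the SVC, display the VDisk details and note the vdisk_UID value
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk HPUX_boot

# On the HP-UX host, append that identifier to the exclusion file
# (hypothetical value shown; use the vdisk_UID reported by lsvdisk)
echo "60050768018301BF2800000000000011" >> /etc/vpathmanualexcl.cfg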


SAN Boot support


SAN Boot is supported on HP-UX by using PVLinks as the multi-pathing software on the boot device. PVLinks or SDD can be used to provide the multi-pathing support for the other devices attached to the system.

5.14.4 Using an SVC VDisk as a cluster lock disk


ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When using an SVC VDisk as your lock disk, should the path to FIRST_CLUSTER_LOCK_PV become unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum occurs. To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the variable FIRST_CLUSTER_LOCK_PV is a different path to the lock disk for each HP node in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A is on a different SVC node and through a different FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.

5.14.5 Support for HP-UX greater than eight LUNs


HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior. To accommodate this behavior, the SVC supports a type attribute associated with a host. This type can be set using the svctask mkhost command and modified using the svctask chhost command (see the sketch after the following list). The type can be set to generic, which is the default, or to HPUX. When an initiator port that is a member of a host of type HPUX accesses the SVC, the SVC behaves in the following way:

- Flat Space Addressing mode is used rather than the Peripheral Device Addressing mode.
- When an Inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
- When any command other than an Inquiry is sent to LUN 0 using Peripheral Device Addressing, the SVC responds as an unmapped LUN 0 would normally respond.
- When an Inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as 1Fh (Unknown Device Type) otherwise.
- When an Inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral Qualifier returned is 001b and the Peripheral Device Type is 1Fh (unknown or no device type). This is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
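The following CLI sketch shows how the host type might be set when the host is created and how it might be changed afterwards. The host name HPUX_srv, the WWPN, and the lowercase type value hpux are illustrative assumptions; check the svctask mkhost and svctask chhost command help on your cluster for the exact parameter values that are accepted.

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name HPUX_srv -hbawwpn 10000000C912F4EA -type hpux
IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux HPUX_srv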

5.15 Using SDDDSM, SDDPCM, and SDD Web interface


After installing the SDDDSM or SDD driver, specific commands are available. To open a command window for SDDDSM or SDD, from the desktop, select Start -> Programs -> Subsystem Device Driver -> Subsystem Device Driver Management. The command documentation for the different operating systems is available in the Multipath Subsystem Device Driver User's Guides:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en

It is also possible to configure the multipath driver so that it offers a Web interface to run the commands. Before this can work, we need to configure the Web interface. Sddsrv does not bind to any TCP/IP port by default, but allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the multipath driver package ships a template file of sddsrv.conf that is named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed. You must create the sddsrv.conf file in the same directory as sample_sddsrv.conf by copying the sample file and naming the copy sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in sddsrv.conf, changing the values of Enableport and Loopbackbind to True. Figure 5-51 shows the start window of the multipath driver Web interface.

Figure 5-51 SDD Web interface
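As a minimal sketch of this preparation on a UNIX host (the /etc path follows the description above, and the exact spelling of the parameters in your sample file should be checked before editing):

# Copy the shipped template to the active configuration file
cp /etc/sample_sddsrv.conf /etc/sddsrv.conf

# Edit /etc/sddsrv.conf and enable port binding, for example:
#   enableport = true
#   loopbackbind = true
vi /etc/sddsrv.conf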

5.16 Calculating the queue depth


The queue depth is the number of I/O operations that can be run in parallel on a device. It is usually possible to set a limit on the queue depth on the subsystem device driver (SDD) paths (or equivalent) or the host bus adapter (HBA). Ensure that you configure the servers to limit the queue depth on all of the paths to the SAN Volume Controller disks in configurations that contain a large number of servers or virtual disks (VDisks). You might have a number of servers in the configuration that are idle, or do not initiate the calculated quantity of I/O operations. If so, you might not need to limit the queue depth.
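How the limit is applied depends on the host platform, multipathing driver, and HBA driver. As one hedged illustration only (the module parameter names below belong to the Linux QLogic and Emulex HBA drivers, and the value 32 is an arbitrary example rather than a calculated recommendation), a per-LUN queue depth limit might be set as follows on a Linux host:

# QLogic qla2xxx driver: limit the queue depth per LUN (example value)
echo "options qla2xxx ql2xmaxqdepth=32" >> /etc/modprobe.conf

# Emulex lpfc driver: equivalent per-LUN limit (example value)
echo "options lpfc lpfc_lun_queue_depth=32" >> /etc/modprobe.conf

# Reload the drivers or reboot the host for the new limits to take effect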


5.17 Further sources of information


For more information about host attachment and configuration to the SVC, refer to the IBM System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.
For more information about SDDDSM or SDD configuration, refer to the IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096.
When looking for information about specific storage subsystems, the following link is usually very helpful for overall information about the different types of subsystems:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

5.17.1 IBM Redbook publications containing SVC storage subsystem attachment guidelines
It is beyond the intended scope of this redbook to describe the attachment to each and every subsystem that the SVC supports. Here is a short list of publications that we found especially useful in the writing of this redbook, and in the field:
- SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how to tune your back-end storage to maximize performance on the SVC.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
- Chapter 14 in DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and procedures to make the most of the performance available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf
- DS4000 Best Practices and Performance Tuning Guide, SG24-6363, describes in detail how to connect and configure your storage for optimized performance on the SVC.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf
- IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller.
  http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf


Chapter 6. Advanced Copy Services


Before we introduce the means to operate FlashCopy, Metro Mirror, and Global Mirror, we first describe the SVC Advanced Copy Services (ACS):
- FlashCopy
- Metro Mirror
- Global Mirror
In Chapter 7, "SVC operations using the CLI" on page 337, we describe how to use the CLI to operate ACS. In Chapter 8, "SVC operations using the GUI" on page 469, we describe how to use the GUI to operate ACS.


6.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time (PiT) copy of one or more VDisks. In the topics that follow, we describe how FlashCopy works on the SVC, and we present examples of how to configure and utilize FlashCopy. FlashCopy is also known as point-in-time copy (PiT). This technique is used to help solve the problem where it is difficult to make a consistent copy of a data set that is being constantly updated. The FlashCopy source is frozen for a few seconds or less during the PiT copy process. It will be able to accept I/O when the PiT copy bitmap is set up and the FlashCopy function is ready to intercept read/write requests in the IO path. Although the background copy operation takes some time, the resulting data at the target appears as though the copy were made instantaneously. SVCs FlashCopy service provides the capability to perform a PiT copy of one or more VDisks. Since it is performed at the block level it operates underneath the operating system and application caches. The image that is presented is 'crash-consistent': that is to say it is similar to one that would be seen in a crash event, such as an unexpected power failure.

6.1.1 Business requirement


The business applications for FlashCopy are many and various. An important use is facilitating consistent backups of constantly changing data, and in these instances a FlashCopy is created to capture a PiT copy. The resulting image can be backed up to tertiary storage such as tape. After the copied data is on tape, the FlashCopy target is redundant. Different tasks can benefit from the use of FlashCopy. In the following sections, we describe the most common situations.

6.1.2 Moving and migrating data


When you need to move a consistent data set from one host to another, FlashCopy can facilitate this action with a minimum of downtime for the host application that depends on the source VDisk. It might be beneficial to quiesce the application on the host and flush the application and OS buffers so that the new VDisk contains data that is "clean" to the application. Without this step, the data on the newly created VDisk is still usable by the application, but it might require recovery procedures (such as log replay) before it can be used. Quiescing the application ensures that the startup time against the mirrored copy is minimized. The cache on the SVC is also flushed using the FlashCopy prestartfcmap command; see "Preparing" on page 272, prior to performing the FlashCopy. The created data set on the FlashCopy target is immediately available, as is the source VDisk.

6.1.3 Backup
FlashCopy does not affect your backup time, but it allows you to create a PiT consistent data set (across VDisks), with a minimum of downtime for your source host. The FlashCopy target can then be mounted on a different host (or the backup server) and backed up. Using this

254

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423ch06.Advanced Copy Services Angelo.fm

procedure, the backup speed becomes less important, since the backup time does not require downtime for the host dependent on the source VDisks.

6.1.4 Restore
You can keep periodically created FlashCopy targets online to provide very fast restore of specific files from the PiT-consistent data set on the FlashCopy targets; these files can simply be copied back to the source VDisk when a restore is needed.

6.1.5 Application testing


You can test new applications and new operating system releases against a FlashCopy of your production data. The risk of data corruption is eliminated, and your application does not need to be taken offline for an extended period of time to perform the copy of the data. Data mining is a good example of an area where FlashCopy can help you. Data mining can now extract data without affecting your application.

6.1.6 SVC FlashCopy features


The FlashCopy function in SVC supports these features:
- The target is the time-zero copy of the source (known as FlashCopy mapping targets).
- The source VDisk and target VDisk are available (almost) immediately.
- One source VDisk can have up to 256 target VDisks, at the same or different PiTs.
- Consistency groups are supported to enable FlashCopy across multiple VDisks.
- The target VDisk can be updated independently of the source VDisk.
- Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of the SVC I/O group to prevent a single point of failure.
- A FlashCopy mapping can be automatically withdrawn after the completion of the background copy.
- FlashCopy consistency groups can be automatically withdrawn after the completion of the background copy.
- Multiple Target FlashCopy: FlashCopy now supports up to 256 target copies from a single source VDisk.
- Space-Efficient FlashCopy (SEFC): SEFC uses disk space only for changes between source and target data and not for the entire capacity of a virtual disk copy.
- FlashCopy licensing: FlashCopy was previously licensed by the source and target virtual capacity. It is now licensed only by the source virtual capacity.
- Incremental FlashCopy: a mapping created with the incremental flag copies only the data that has changed on the source or the target since the previous copy completed. This can substantially reduce the time required to recreate an independent image.
- Reverse FlashCopy: enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.


- Cascaded FlashCopy: the target VDisk of a FlashCopy mapping can itself be the source VDisk in a further FlashCopy mapping.

6.2 Reverse FlashCopy


With SVC version 5.1.x, Reverse FlashCopy support is available. Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. It supports multiple targets and thus multiple rollback points.

A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse operation does not destroy the original target; thus, anything using the target, such as a tape backup process, is not disrupted, and multiple different recovery points can be tested. SVC is also unique in that an optional copy of the source VDisk can be made before starting the reverse copy operation in order to diagnose problems.

When a user suffers a disaster and needs to restore from an on-disk backup, the procedure is as follows (a hedged CLI sketch follows Figure 6-1):
1. (Optionally) Create a new target VDisk (VDisk Z) and FlashCopy the production VDisk (VDisk X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (VDisk Y or VDisk W) as the source VDisk and VDisk X as the target VDisk, if this map does not already exist.
3. Start the FlashCopy map (VDisk Y -> VDisk X) with the new -restore option to copy the backup data onto the production disk.
The production disk will be instantly available with the backup data. Figure 6-1 shows an example of Reverse FlashCopy.


Figure 6-1 Reverse FlashCopy
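As an illustration only, the following CLI sketch follows the steps above using the hypothetical VDisk names from the text. The -prep option prepares and starts a mapping in one step, and -restore starts a mapping whose target is the production VDisk; verify the exact flags accepted by svctask mkfcmap and svctask startfcmap at your code level with the command help before use.

(Optional copy of the production VDisk for later problem analysis)
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VDisk_X -target VDisk_Z -name Analysis_Map
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep Analysis_Map

(Create and start the reverse map from the backup onto the production VDisk)
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VDisk_Y -target VDisk_X -name Restore_Map
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep -restore Restore_Map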

Whether or not the initial FlashCopy map (VDisk X -> VDisk Y) is incremental, the reverse operation will only copy modified data. Consistency groups are reversed by creating a set of new reverse FC maps and adding them to a new reverse consistency group. A consistency group cannot contain more than one FC map with the same target VDisk.

6.2.1 FlashCopy and TSM


The management of many large Reverse FlashCopy consistency groups is a complex task, without some tool for assistance. IBM Tivoli Storage FlashCopy Manager V2.1 is a new product that will improve the interlock between SVC and TSM for Advanced Copy Services (ACS) as well. Figure 6-2 shows the TSM for ACS features.


Figure 6-2 TSM for ACS features

FlashCopy Manager (FCM) provides many of the features of TSM for Advanced Copy Services without the requirement to use TSM. With Tivoli FlashCopy Manager it will be possible to co-ordinate and automate host preparation steps before issuing FlashCopy start commands to ensure a consistent backup of the application is taken. Databases can be put into hot backup mode, and before starting FlashCopy you would flush the filesystem cache. FCM also allows for easier management of on-disk backups using FlashCopy, and provides a simple interface to the reverse operation. Figure 6-3 shows in brief the FCM feature.


Figure 6-3 TSM FlashCopy Manager features

It is beyond the intended scope of this book to describe TSM FCM.

6.3 How FlashCopy works


FlashCopy works by defining a FlashCopy mapping that consists of one source VDisk together with one target VDisk. Multiple FlashCopy mappings can be defined, and PiT consistency can be observed across multiple FlashCopy mappings using consistency groups; see "Consistency group with MTFC" on page 263.

When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the original contents of the target VDisk are overwritten. When the FlashCopy operation is started, the target VDisk presents the contents of the source VDisk as they existed at the single PiT at which FlashCopy was started. This is also referred to as a time-zero copy (T0).

When a FlashCopy is started, the source and target VDisks are instantaneously available, because when it starts, bitmaps are created to govern and redirect I/O to the source or target VDisk, respectively, depending on where the requested block is present, while the blocks are copied in the background from the source to the target VDisk. For more details about the background copy, see "Grains and the FlashCopy bitmap" on page 264.

In Figure 6-4, the redirection of the host I/O towards the source and target VDisks is explained.


Figure 6-4 Redirection of host I/O

6.4 Implementation of SVC FlashCopy


In the topics that follow, we describe how FlashCopy is implemented in the SVC.

6.4.1 FlashCopy mappings


In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and target VDisks must be the same size. The minimum granularity that the SVC supports for FlashCopy is an entire VDisk; it is not possible to FlashCopy only part of a VDisk. The source and target VDisks must both belong to the same SVC cluster, but they can be in different I/O groups within that cluster.

SVC FlashCopy associates a source VDisk and a target VDisk together in a FlashCopy mapping. VDisks that are members of a FlashCopy mapping cannot have their size increased or decreased while they are members of the FlashCopy mapping. The SVC supports the creation of enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy mapping.

A FlashCopy mapping is the act of creating a relationship between a source VDisk and a target VDisk. FlashCopy mappings can be either stand-alone or a member of a consistency group. You can perform the act of preparing, starting, or stopping on either the stand-alone mapping or the consistency group.

Note: Once a mapping is in a consistency group, you can only operate on the group and can no longer prepare, start, or stop the individual mapping.

Figure 6-5 illustrates the concept of FlashCopy mapping.


Figure 6-5 FlashCopy mapping

6.4.2 Multiple Target FlashCopy


SVC supports up to 256 target VDisks to be copied from a single source VDisk. Each copy is managed by a unique mapping. In general, each mapping acts independently and is not affected by the fact that other mappings share the same source VDisk. Figure 6-6 illustrates how these can be viewed.

Figure 6-6 Multiple Target FlashCopy implementation

Figure 6-6 shows four targets and mappings taken from a single source. It also shows that there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was started), through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target VDisks are defined and because of the dependency chain that results.

A write to the source VDisk does not cause its data to be copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in this example). The older targets refer to newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target VDisks and the true source VDisk as a type of composite source. It treats all older VDisks as a kind of target (and behaves like a source to them).

If the mapping for an intermediate target VDisk shows 100% progress, then its target VDisk contains a complete set of data. In this case, mappings treat the set of newer target VDisks, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets.

More information about Multiple Target FlashCopy (MTFC) can be found in 6.4.6, "Interaction and dependency between MTFC" on page 265.


6.4.3 Consistency groups


Consistency groups address the issue where the objective is to preserve data consistency across multiple VDisks, because the applications have related data that span multiple VDisks. A requirement for preserving the integrity of data being written is to ensure that dependent writes are executed in the application's intended sequence. Because the SVC provides PiT semantics, a self consistent data set is obtained. FlashCopy mappings can be members of a consistency group, or can be operated standalone, not as part of a consistency group. FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group. Figure 6-7 illustrates a consistency group consisting of two FlashCopy mappings.

Figure 6-7 FlashCopy consistency group

Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next. However, if the database log (updates 1 and 3) and the database itself (update 2) are on different VDisks and a FlashCopy mapping is started during this update, then you need to exclude the possibility that the database itself is copied slightly before the database log, resulting in the target VDisks seeing writes (1) and (3) but not (2), because the database was copied before the write completed.

In this case, if the database was restarted using the backup made from the FlashCopy target disks, the database log would indicate that the transaction had completed successfully when, in fact, that is not the case, because the FlashCopy of the VDisk with the database file was started (the bitmap was created) before the write was on the disk. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across VDisks and create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an atomic operation. To achieve this condition, the SVC supports the concept of consistency groups. A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the maximum number of FlashCopy mappings supported by the SVC Cluster). FlashCopy commands can then be issued to the FlashCopy consistency group and thereby simultaneously for all FlashCopy mappings defined in the consistency group. For example, when issuing a FlashCopy start command to the consistency group, all of the FlashCopy mappings in the consistency group are started at the same time, resulting in a PiT copy that is consistent across all of the FlashCopy mappings that are contained in the consistency group.
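As a hedged illustration of this, the following CLI sketch creates a consistency group, places two mappings for a database and its log into it, and then prepares and starts the group as one atomic operation. The VDisk and group names are hypothetical; verify the command options at your code level with the CLI help.

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name DB_CG
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Data -target DB_Data_T0 -consistgrp DB_CG
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Log -target DB_Log_T0 -consistgrp DB_CG
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp DB_CG
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp DB_CG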

Consistency group with MTFC


It is important to note that a consistency group aggregates FlashCopy mappings, not VDisks. Thus, where a source VDisk has multiple FlashCopy mappings, they can be in the same or different consistency groups. If a particular VDisk is the source VDisk for multiple FlashCopy mappings, you may wish to create separate consistency groups to separate each mapping of the same source VDisk. If the source VDisk with multiple target VDisks is in the same consistency group, then the result will be that when the consistency group is started, multiple identical copies of the VDisk will be created. However, this might be what the user wants, for example, they may want to run multiple simulations on the same set of source data, and this would be one way of obtaining identical sets of source data.

Maximum configurations
Table 6-1 shows the FlashCopy properties and maximum configurations.
Table 6-1 FlashCopy properties and maximum configuration

FlashCopy property                  Maximum    Comment
FC targets per source               256        The maximum number of FC mappings that can exist with the same source VDisk.
FC mappings per cluster             4096       The number of mappings is no longer limited by the number of VDisks in the cluster, so the FC component limit applies.
FC consistency groups per cluster   127        An arbitrary limit policed by the software.
FC VDisk capacity per I/O Group     1024 TB    This is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro and Global Mirror bitmap space. The default is 40 TB.
FC mappings per consistency group   512        Due to the time taken to prepare a consistency group with a large number of mappings.

6.4.4 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the FlashCopy indirection layer is to enable both the source and target VDisks for read and write I/O immediately after the FlashCopy has been started.


To illustrate how the FlashCopy indirection layer works, we look at what happens when a FlashCopy mapping is prepared and subsequently started. When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write data in the cache onto the source VDisk or VDisks that are part of a consistency group.
2. Put the cache into write-through mode on the source VDisk(s).
3. Discard the cache for the target VDisk(s).
4. Establish a sync point on all of the source VDisks in the consistency group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source and target VDisks.
6. Enable the cache on both the source and target VDisks.

FlashCopy provides the semantics of a PiT copy using the indirection layer, which intercepts I/Os targeted at either the source or target VDisks. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path. This occurs as an atomic command across all FlashCopy mappings in the consistency group.

The indirection layer makes a decision about each I/O. This decision is based upon:
- The VDisk and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O to go through to the underlying storage, redirects the I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be copied from the source VDisk to the target VDisk. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.

6.4.5 Grains and the FlashCopy bitmap


When data is copied between virtual disks by FlashCopy, either from source to target or from target to target, it is copied in units of address space known as grains. The grain size is 256 KB or 64 KB. The FlashCopy bitmap contains one bit for each grain. The bit records whether the associated grain has yet been split by copying the grain from the source to the target.
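As a rough illustration of the resulting bitmap overhead (an informal calculation, not an official sizing rule): a 1 TB source VDisk divided into 256 KB grains contains 1 TB / 256 KB = 4,194,304 grains, so its FlashCopy bitmap requires 4,194,304 bits = 524,288 bytes, or 512 KB per mapping. With the 64 KB grain size, the same VDisk needs four times as much, about 2 MB of bitmap space.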

Source Reads
Reads of the source are always passed through to the underlying source disk.

Target Reads
In order for FlashCopy to process a read from the target disk, it must consult its bitmap. If the data being read has already been copied to the target, then the read is sent to the target disk. If it has not, then the read is sent to the source virtual disk, or possibly to another target virtual disk if multiple FlashCopy mappings exist for the source virtual disk. Clearly, this algorithm requires that, while this read is outstanding, no writes that would change the data being read are allowed to execute. The SVC satisfies this requirement by using a cluster-wide locking scheme.

Writes to the source or target


Where writes occur to the source or target in an area (grain) that has not yet been copied, these writes will usually be stalled while a copy operation is performed to copy data from the source to the target, to maintain the illusion that the target contains its own copy. A specific optimization is performed where an entire grain is written to the target virtual disk.

In this case, the new grain contents are written to the target virtual disk, and if this succeeds, then the grain is marked as split in the FlashCopy bitmap without a copy from the source to the target having been performed. If the write fails, then the grain is not marked as split.

The rate at which the grains are copied from the source VDisk to the target VDisk is called the copy rate. By default, the copy rate is 50, although this value can be altered. For more information about copy rates, see 6.4.13, "Space-efficient FlashCopy" on page 274.

The FlashCopy indirection layer algorithm


Imagine the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is active. The I/O is intercepted and handled according to whether it is directed at the source VDisk or the target VDisk, depending on the nature of the I/O (read or write) and the state of the grain (has it been copied or not). In Figure 6-8, we illustrate how the background copy runs while I/Os are handled according to the indirection layer algorithm.

Figure 6-8 I/O processing with FlashCopy

6.4.6 Interaction and dependency between MTFC


Figure 6-9 represents a set of four FlashCopy mappings that share a common source. The FlashCopy mappings will target VDisks Target 0, Target 1, Target 2, and Target 3.


Figure 6-9 Interactions between MTFC mappings

- Target 0 is not dependent on the source because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).
- Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been copied. Target 2 is dependent on it, since Target 2 is 20% copy complete. Once all of Target 1 has been copied, it can then move to the idle_copied state.
- Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the idle_copied state.
- Target 3 has actually completed copying, so it is not dependent on any other maps.

Write to target VDisk


A write to an intermediate or newest target VDisk must consider the state of the grain within its own mapping, as well as that of the grain of the next oldest mapping:
- If the grain of the next oldest mapping has not yet been copied, then it must be copied before the write is allowed to proceed, in order to preserve the contents of the next oldest mapping. The data written to the next oldest mapping comes from a target or the source.
- If the grain in the target being written has not yet been copied, then the grain is copied from the oldest already-copied grain in the mappings that are newer than it, or from the source if none are already copied. Once this copy has been done, the write can be applied to the target.

Read to target VDisk


If the grain being read has been split, then the read simply returns data from the target being read. If the read is to an uncopied grain on an intermediate target VDisk, then each of the newer mappings is examined in turn to see if the grain has been split. The read is surfaced from the first split grain found or from the source VDisk if none of the newer mappings has a split grain.


Stopping copy process


An important scenario arises when a stop command is delivered to a mapping for a target that has dependent mappings. Once a mapping is in the stopped state, it can be deleted or restarted, and this must not be allowed if there are still grains that hold data that other mappings depend upon. To avoid this, when a mapping receives a stopfcmap or stopfcconsistgrp command, rather than immediately moving to the stopped state, it enters the stopping state. An automatic copy process is driven that will find and copy all data uniquely held on the target VDisk of the mapping that is being stopped, to the next oldest mapping that is in the copying state. Note: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping will automatically make an asynchronous state transition to the stopped state or the idle_copied state if the mapping was in the copying state with progress = 100%. For example, if the mapping associated with Target 0 was issued a stopfcmap or stopfcconsistgrp command, then Target 0 would enter the stopping state while a process copied the data of Target 0 to Target 1. Once all the data has been copied, Target 0 would enter the stopped state, and Target 1 would no longer be dependent upon Target 0, but would remain dependent upon Target 2.

6.4.7 Summary of the FlashCopy indirection layer algorithm


Table 6-2 summarizes the indirection layer algorithm.
Table 6-2 Summary table of the FlashCopy indirection layer algorithm

Source VDisk, grain not yet split (copied):
  Read:  Read from the source VDisk.
  Write: Copy the grain to the most recently started target for this source, then write to the source.

Source VDisk, grain already split:
  Read:  Read from the source VDisk.
  Write: Write to the source VDisk.

Target VDisk, grain not yet split:
  Read:  If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these; otherwise, read from the source.
  Write: Hold the write. Check the dependency target VDisks to see whether the grain is split. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.

Target VDisk, grain already split:
  Read:  Read from the target VDisk.
  Write: Write to the target VDisk.

6.4.8 Interaction with the cache


This copy-on-write process can introduce significant latency into write operations. In order to isolate the active application from this latency, the FlashCopy indirection layer is placed logically below the cache.

This means that the copy latency is typically incurred only when data is destaged from the cache, rather than on write operations from an application, which would otherwise be blocked while waiting for the copy-on-write to complete. In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.

Figure 6-10 Logical placement of the FlashCopy indirection layer

6.4.9 FlashCopy rules


With SVC 5.1, the maximum number of supported FlashCopy mappings has been increased to 8192 per SVC cluster. The following rules have to be considered when defining FlashCopy mappings:
- There is a one-to-one mapping of the source VDisk to the target VDisk.
- One source VDisk can have up to 256 target VDisks.
- The source and target VDisks can be in different I/O groups of the same cluster.
- The minimum FlashCopy granularity is the entire VDisk.
- The source and target must be exactly equal in size.
- The size of a source and target VDisk cannot be altered (increased or decreased) after the FlashCopy mapping is created.
- There is a per I/O group limit of 1024 TB on the quantity of source and target VDisk capacity that can participate in FlashCopy mappings.

6.4.10 FlashCopy and image mode disks


You can use FlashCopy with an image mode VDisk. Since the source and target VDisks must be exactly the same size when creating a FlashCopy mapping, a VDisk must be created with the exact same size as the image mode VDisk. To accomplish this task, use the command svcinfo lsvdisk -bytes VDiskName. The size in bytes is then used to create the VDisk to be used in the FlashCopy mapping. In Example 6-1 we list the size of the VDisk Image_VDisk_A. Subsequently, the VDisk VDisk_A_copy is created, specifying the same size.
Example 6-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_VDisk_A

id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
...
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name VDisk_A_copy -mdiskgrp MDG_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created

Tip: Alternatively, the expandvdisksize and shrinkvdisksize commands can be used to modify the size of a VDisk; these commands support specification of the size in bytes. See 7.4.10, Expanding a VDisk on page 365 and 7.4.16, Shrinking a VDisk on page 369 for more information.

An image mode VDisk can be used as either a FlashCopy source or target VDisk.
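When the image mode VDisk is not an exact multiple of a gigabyte, the capacity reported in GB is not precise enough, and the -bytes approach described above should be used instead. The following is a minimal sketch of that approach; the byte value shown corresponds to 36 GiB and is an assumed example, as are the VDisk and MDisk group names:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -bytes Image_VDisk_A
(the capacity field now reports the size in bytes, for example 38654705664 for a 36 GiB VDisk)
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 38654705664 -unit b -name VDisk_A_copy -mdiskgrp MDG_DS47 -vtype striped -iogrp 1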

6.4.11 FlashCopy mapping events


In this section, we explain the series of events that modify the states of a FlashCopy mapping. In Figure 6-11 on page 270, the FlashCopy mapping state diagram shows an overview of the states that apply to a FlashCopy mapping.

Overview of a FlashCopy sequence of events:
1. Associate the source data set with a target location (one or more source and target VDisks).
2. Create a FlashCopy mapping for each source VDisk to the corresponding target VDisk. The target VDisk must be equal in size to the source VDisk.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
   a. Flush the cache for the source.
   b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
   a. Pause I/O (briefly) on the source.
   b. Resume I/O on the source.
   c. Start I/O on the target.
A CLI sketch of steps 2, 4, and 5 follows this list.
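The following is a minimal CLI sketch of this sequence for a single stand-alone mapping, assuming a source VDisk named DB_Source and a target VDisk named DB_Target that have already been created with equal sizes; the VDisk and mapping names are hypothetical:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target -name DB_Map -copyrate 50
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap DB_Map
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap DB_Map
(wait until the status field shows prepared)
IBM_2145:ITSO-CLS1:admin>svctask startfcmap DB_Map

For a set of mappings that must be consistent with each other, the same flow applies with mkfcconsistgrp, prestartfcconsistgrp, and startfcconsistgrp operating on the consistency group instead of the individual mapping.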

Figure 6-11 FlashCopy mapping state diagram

Table 6-3 Mapping events

Create: A new FlashCopy mapping is created between the specified source virtual disk (VDisk) and the specified target VDisk. The operation fails if any of the following is true:
- For SAN Volume Controller software Version 4.1.0 or earlier, the source or target VDisk is already a member of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source or target VDisk is already a target VDisk of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source VDisk is already a member of 16 FlashCopy mappings.
- For SAN Volume Controller software Version 4.3.0 or later, the source VDisk is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target VDisks are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed to either a consistency group for FlashCopy mappings that are members of a normal consistency group, or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the preparing state.
Attention: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target VDisk, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all the FlashCopy mappings in a consistency group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly with respect to I/Os that are directed at the VDisks. This is achieved with the startfcmap or startfcconsistgrp command. The following occurs while the startfcmap or startfcconsistgrp command runs:
- New reads and writes to all source VDisks in the consistency group are paused in the cache layer until all ongoing reads and writes below the cache layer are completed.
- After all FlashCopy mappings in the consistency group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the consistency group, read and write operations are unpaused on the source VDisks.
- The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target VDisks.

Modify: The following FlashCopy mapping properties can be modified:
- FlashCopy mapping name
- Clean rate
- Consistency group
- Copy rate (for background copy)
- Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped:
- You have issued a command.
- An I/O error has occurred.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.

Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.

6.4.12 FlashCopy mapping states


In this section, we explain the states of a FlashCopy mapping in more detail.

Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target, but they behave as independent VDisks in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the background copy is running. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for some tracks. Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by user command or by an I/O error. When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Because of this, while the FlashCopy mapping is in this state, the target VDisk is in the offline state. In order to regain access to the target, the mapping must be started again (the previous PiT will be lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible and read/write caching is enabled for the source. In the stopped state, a mapping can be prepared again or it can be deleted.

Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target VDisk depends on whether the background copy process had completed while the mapping was in the copying state. If the copy process had completed, the target VDisk remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target VDisk; the target VDisk is taken offline, and the stopping copy process runs. When the data has been copied, a stop complete asynchronous event is notified. The mapping then moves to the idle_or_copied state if the background copy has completed, or to the stopped state if it has not. The source VDisk remains accessible for I/O.

Suspended
The target has been flashed from the source, and was in the copying or stopping state. Access to the metadata has been lost, and as a consequence, both source and target VDisks are offline. The background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping will return to the copying or stopping state, access to the source and target VDisks will be restored, and the background copy or stopping process resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache, consuming resources, until the FlashCopy mapping leaves the suspended state.

Preparing
Since the FlashCopy function is placed logically below the cache to anticipate any write latency problem, it demands no read or write data for the target and no write data for the source in the cache at the time that the FlashCopy operation is started. This ensures that the resulting copy is consistent.
Performing the necessary cache flush as part of the startfcmap or startfcconsistgrp command would unnecessarily delay the I/Os received after the startfcmap or startfcconsistgrp command is executed, because these I/Os must wait for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing I/Os to continue to the source VDisk. In the preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source VDisk from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command received from the host.
3. Discarding any read or write data associated with the target VDisk from the cache.
While in this state, writes to the source VDisk experience additional latency because the cache is operating in write-through mode. While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will not perform reads or writes; these are failed by the SCSI front end. Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers in the host operating systems or applications, is also instructed to flush any outstanding writes to the source VDisk.

Prepared
When in the prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target VDisk is in the offline state. In the prepared state, writes to the source VDisk experience additional latency because the cache is operating in write through mode.

Summary of FlashCopy mapping states


Table 6-4 lists the various FlashCopy mapping states and the corresponding state of the source and target VDisks.
Table 6-4 FlashCopy mapping state summary

State           Source online/offline   Source cache state   Target online/offline            Target cache state
Idling/Copied   Online                  Write-back           Online                           Write-back
Copying         Online                  Write-back           Online                           Write-back
Stopped         Online                  Write-back           Offline                          -
Stopping        Online                  Write-back           Online if copy complete,         -
                                                             offline if copy not complete
Suspended       Offline                 Write-back           Offline                          -
Preparing       Online                  Write-through        Online but not accessible        -
Prepared        Online                  Write-through        Online but not accessible        -

6.4.13 Space-efficient FlashCopy


You can have a mix of space-efficient and fully allocated VDisks in FlashCopy mappings. One common combination is a fully allocated source with a space-efficient target, which allows the target to consume a smaller amount of real storage than the source.

For best performance, the grain size of the space-efficient VDisk must match the grain size of the FlashCopy mapping. However, if the grain sizes are different, the mapping still proceeds.

Consider the following information when you create your FlashCopy mappings:
- If you are using a fully allocated source with a space-efficient target, disable the background copy and cleaning mode on the FlashCopy map by setting both the background copy rate and cleaning rate to zero. Otherwise, if these features are enabled, all of the source is copied onto the target VDisk. This causes the space-efficient VDisk to either go offline or to grow as large as the source.
- If you are using only a space-efficient source, only the space that is used on the source VDisk is copied to the target VDisk. For example, if the source VDisk has a virtual size of 800 GB and a real size of 100 GB, of which 50 GB has been used, only the used 50 GB is copied.
A CLI sketch of the first combination follows this list.
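The following minimal sketch creates a mapping from a fully allocated source to a space-efficient target with background copy and cleaning disabled, as recommended above. The VDisk and mapping names are hypothetical examples:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source FullSource -target SEVTarget -name SE_Map -copyrate 0 -cleanrate 0
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap SE_Map
IBM_2145:ITSO-CLS1:admin>svctask startfcmap SE_Map

With the copy rate and clean rate at zero, only grains that are overwritten on the source (or written on the target) cause allocations on the space-efficient target.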

Multiple space-efficient targets for FlashCopy


The SVC implementation of multiple target FlashCopy ensures that when new data is written to a source or target, that data is copied to zero or one other targets. A consequence of this implementation is that space-efficient VDisks can be used in conjunction with multiple target FlashCopy without causing allocations to occur on multiple targets when data is written to the source.

Space-efficient incremental FlashCopy


The implementation of space-efficient VDisks does not preclude the use of incremental FlashCopy on the same VDisks. It does not make much sense to have a fully allocated source VDisk and to use incremental FlashCopy to copy it to a space-efficient target VDisk; however, this combination is possible. Two more interesting combinations of incremental FlashCopy and space-efficient VDisks are:
- A space-efficient source VDisk can be incrementally FlashCopied to a space-efficient target VDisk. Whenever the FlashCopy is retriggered, only data that has been modified is re-copied to the target. Note that if space is allocated on the target because of I/O to the target VDisk, this space is not reclaimed when the FlashCopy is retriggered.
- A fully allocated source VDisk can be incrementally FlashCopied to another fully allocated VDisk at the same time as being copied to multiple space-efficient targets (taken at different points in time). This allows a single full backup to be kept for recovery purposes, separates the backup workload from the production workload, and at the same time allows older space-efficient backups to be retained.
A CLI sketch of an incremental mapping follows this list.
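As a minimal sketch, an incremental mapping between two space-efficient VDisks might be defined and retriggered as follows; the -incremental flag shown on mkfcmap and the VDisk names are assumptions for illustration, so verify the syntax against the CLI reference for your code level:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source SEV_Source -target SEV_Target -name Incr_Map -incremental -copyrate 50
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap Incr_Map
IBM_2145:ITSO-CLS1:admin>svctask startfcmap Incr_Map
(later, to refresh the copy, prepare and start the same mapping again; only changed grains are re-copied)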

Migration from and to a Space-efficient VDisk


There are different scenarios to migrate a non-space-efficient VDisk to a space-efficient VDisk; we describe migration fully in Chapter 9, Data migration on page 675.
6.4.14 Background copy


The FlashCopy background copy feature enables you to copy all the data in a source VDisk to the corresponding target VDisk. Without the background copy feature, only data that is changed on the source VDisk is copied to the target VDisk. The benefit of using a FlashCopy mapping with background copy enabled is that the target VDisk becomes a real clone (independent from the source VDisk) of the FlashCopy mapping source VDisk. The background copy rate is a property of a FlashCopy mapping that is expressed as a value between 0 and 100. It can be changed in any FlashCopy mapping state and can be different for the mappings in one consistency group. A value of 0 disables background copy. The relationship of the background copy rate value to the attempted number of grains to be split (copied) per second is shown in Table 6-5.
Table 6-5 Background copy rate

Value     Data copied per second   Grains per second
1-10      128 KB                   0.5
11-20     256 KB                   1
21-30     512 KB                   2
31-40     1 MB                     4
41-50     2 MB                     8
51-60     4 MB                     16
61-70     8 MB                     32
71-80     16 MB                    64
81-90     32 MB                    128
91-100    64 MB                    256

The grains per second numbers represent the maximum number of grains the SVC will copy per second, assuming that the bandwidth to the MDisks can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, then background copy I/O contends for resources on an equal basis with I/O arriving from hosts. Both tend to see an increase in latency, and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O group in which the source VDisk resides.
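Because the copy rate can be changed in any mapping state, a common practice is to start a mapping with a low rate and raise it during off-peak hours. The following minimal sketch, using a hypothetical mapping name, changes the background copy rate of an existing mapping:

IBM_2145:ITSO-CLS1:admin>svctask chfcmap -copyrate 80 DB_Map
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap DB_Map
(the copy_rate field reflects the new value, and the progress field shows how far the background copy has advanced)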

6.4.15 Synthesis
The FlashCopy functionality in SVC simply creates copies of VDisks. All the data in the source VDisk is copied to the destination VDisk, including operating system control information as well as application data and metadata. Some operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata in the target VDisk so that the operating system can use the disk.

6.4.16 Serialization of I/O by FlashCopy


In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and target VDisks. However, there is a lock for each grain, and the lock can be shared or exclusive. For Multiple Target FlashCopy, a common lock is shared by all the mappings that are derived from a particular source VDisk. The lock is taken in the following modes under the following conditions:
- The lock is taken shared for the duration of a read from the target VDisk that touches a grain that is not split.
- The lock is taken exclusive during a grain split. This happens prior to FlashCopy actioning any destage (or write-through) from the cache to a grain that is going to be split (the destage waits for the grain to be split). The lock is held during the grain split and released before the destage is processed.
If the lock is held shared, and another process wants to take the lock shared, this request is granted unless a process is already waiting to take the lock exclusive. If the lock is held shared and it is requested exclusive, the requesting process must wait until all holders of the shared lock free it. Similarly, if the lock is held exclusive, a process wanting to take the lock in either shared or exclusive mode must wait for it to be freed.

6.4.17 Error handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not affect the error handling or reporting of errors in the I/O path. Only when a FlashCopy mapping is copying or stopping are error handling and reporting affected by FlashCopy. We describe these scenarios in the following sections.

Node failure
Normally, two copies of the FlashCopy bitmaps are maintained in non-volatile memory, one on each of the two SVC nodes making up the I/O group of the source VDisk. When one node fails, one copy of the bitmaps, for all FlashCopy mappings whose source VDisk is a member of the failing node's I/O group, becomes inaccessible. FlashCopy continues in the copying state with a single copy of the FlashCopy bitmap stored as non-volatile on the remaining node in the source I/O group. The cluster metadata is updated to indicate that the missing node no longer holds up-to-date bitmap information. When the failing node recovers, or a replacement node is added to the I/O group, up-to-date bitmaps are re-established on the new node, and it once again provides a redundant location for the bitmaps.

If the FlashCopy bitmap becomes unavailable altogether (for example, because both nodes of the source I/O group are inaccessible), the mapping enters the suspended state. When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in the I/O group is accessible), the FlashCopy mapping returns to the copying state, access to the source and target VDisks is restored, and the background copy process resumes. Unflushed data that was written to the source or target before the FlashCopy was suspended remains pinned in the cache until the FlashCopy mapping leaves the suspended state.

Note: If both nodes in the I/O group to which the target VDisk belongs become unavailable, the host cannot access the target VDisk.

Path failure (path offline state)


In a fully functioning cluster, all nodes have a software representation of every VDisk in the cluster within their application hierarchy. Because the SAN that links the SVC nodes to each other and to the managed disks is made up of many independent links, it is possible for a subset of the nodes to be temporarily isolated from some of the managed disks. When this happens, the managed disks are said to be path offline on those nodes.

Note: Other nodes might see the managed disks as online, because their connection to the managed disks is still functioning.

When a managed disk enters the path offline state on an SVC node, all the VDisks that have extents on that managed disk also become path offline, again only on the affected nodes. When a VDisk is path offline on a particular SVC node, host access to that VDisk through that node fails with SCSI sense indicating offline.

Path offline for the source VDisk


If a FlashCopy mapping is in the copying state and the source VDisk goes path offline, then this path offline state is propagated to all target VDisks up to but not including the target VDisk for the newest mapping that is 100% copied but remains in the copying state. If no mappings are 100% copied, then all target VDisks will be taken offline. Again, note that path offline is a state that exists on a per-node basis. Other nodes may not be affected. If the source VDisk comes online, then the target and source VDisks are brought back online.

Path offline for the target VDisk


If a target VDisk goes path offline, but the source VDisk is still online, and if there are any dependent mappings, then those target VDisks will also go path offline. The source VDisk will remain online.

6.4.18 Asynchronous notifications


FlashCopy raises informational error logs when mappings or consistency groups make certain state transitions. These state transitions occur as a result of configuration events that complete asynchronously, and the informational errors can be used to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other configuration events complete synchronously, and no informational errors are logged as a result of those events. The following informational errors are raised:
- PREPARE_COMPLETED: This is logged when the FlashCopy mapping or consistency group enters the prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or consistency group.
- COPY_COMPLETED: This is logged when the FlashCopy mapping or consistency group enters the idle_or_copied state when it was previously in the copying or stopping state. This indicates that the target disk now contains a complete copy and no longer depends on the source.
- STOP_COMPLETED: This is logged when the FlashCopy mapping or consistency group has entered the stopped state as a result of a user request to stop. It is logged after the automatic copy process has completed, which includes mappings where no copying needed to be performed. It is different from the error that is logged when a mapping or group enters the stopped state as a result of an I/O error.

6.4.19 Interoperation with Metro Mirror and Global Mirror


FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection of data. For example, we can perform a Metro Mirror to duplicate data from Site_A to Site_B, then do a daily FlashCopy and copy it elsewhere. Table 6-6 details which combinations of FlashCopy and Remote Copy are supported. In the table, Remote Copy refers to Metro Mirror and Global Mirror.
Table 6-6 FlashCopy and Remote Copy interaction

Component: FlashCopy source
- Remote Copy primary: Supported
- Remote Copy secondary: Supported. Note: When the FlashCopy mapping is in the preparing and prepared states, the cache at the Remote Copy secondary site operates in write-through mode, which adds latency to the already latent Remote Copy relationship.

Component: FlashCopy destination
- Remote Copy primary: Not supported
- Remote Copy secondary: Not supported

6.4.20 Recovering data from FlashCopy


FlashCopy can be used to recover data if some form of corruption has happened. For example, if a user deletes data by mistake, you can map the FlashCopy target VDisks to the application server, import the logical volume level configuration, start the application, and restore the data back to a given point in time.

Tip: It is better to map a FlashCopy target VDisk to a backup machine that has the same application installed. We do not recommend that you map a FlashCopy target VDisk to the same application server to which the FlashCopy source VDisk is mapped, because the FlashCopy target and source VDisks have the same signature, pvid, vgda, and so on. Special steps are necessary to handle this conflict at the operating system level. For example, you can use the recreatevg command in AIX to generate different volume group, logical volume, and file system names in order to avoid a naming conflict. An example of this approach follows.

FlashCopy backup is a disk-based backup copy that can be used to restore service more quickly than other backup techniques. This application is further enhanced by the ability to maintain multiple backup targets, spread over a range of time, allowing the user to choose a backup from before the time of corruption.
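The following minimal AIX sketch illustrates the renaming approach mentioned in the tip. The disk name hdisk4, the new volume group name recovervg, and the prefixes are assumed examples only; adapt them to your environment:

# rediscover the newly mapped FlashCopy target LUN on the recovery host
cfgmgr
# recreate the volume group under a new name so that it does not clash
# with the volume group definitions coming from the source VDisk
recreatevg -y recovervg -Y recov -L /recov hdisk4
# list the renamed logical volumes before mounting and verifying the data
lsvg -l recovervg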

6.5 Metro Mirror


In the topics that follow, we describe the Metro Mirror copy service, which is a synchronous remote copy function. Metro Mirror in SVC is similar to Metro Mirror in the IBM System Storage DS family. SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of the disk subsystems that are used. The general application of Metro Mirror is to maintain two real-time synchronized copies of a disk. Often, the two copies are geographically dispersed across two SVC clusters, although it is possible to use Metro Mirror within a single cluster (within an I/O group). If the primary copy fails, the secondary copy can be enabled for I/O operation.

Tip: Intracluster Metro Mirror consumes more resources within a cluster than an intercluster Metro Mirror relationship. We recommend intercluster Metro Mirror when possible.

A typical application of this function is to set up a dual-site solution using two SVC clusters, where the first site is considered the primary or production site, and the second site is considered the backup or failover site, which is activated when a failure at the first site is detected.

6.5.1 Metro Mirror overview


Metro Mirror works by establishing a Metro Mirror relationship between two VDisks of equal size. To maintain data integrity for dependent writes, you can use consistency groups to group a number of Metro Mirror relationships together, similar to FlashCopy consistency groups. SVC provides both intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror


Intracluster Metro Mirror can be applied within any single I/O group. Metro Mirror across I/O groups in the same SVC cluster is not supported, since intracluster Metro Mirror can only be performed between VDisks in the same I/O group.

Intercluster Metro Mirror


Intercluster Metro Mirror operations require a pair of SVC clusters that are separated by a number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC partnership, which must be performed on both SVC clusters to establish a fully functional Metro Mirror partnership. Using standard single mode connections, the supported distance between two SVC clusters in a Metro Mirror partnership is 10 km, although greater distances can be achieved by using extenders. For extended distance solutions, contact your IBM representative.

Note: When a local and a remote fabric are connected together for Metro Mirror purposes, the ISL hop count between a local node and a remote node must not exceed seven.

6.5.2 Remote copy techniques


Metro Mirror is a synchronous remote copy, which is briefly explained below. To illustrate the differences between synchronous and asynchronous remote copy, asynchronous remote copy is also explained.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique that ensures that, as long as writes to the secondary VDisks are possible, writes are committed at both primary and secondary VDisks before the application is given acknowledgement of completion of a write. Errors, such as a loss of connectivity between the two clusters, can mean that it is not possible to replicate data from the primary VDisk to the secondary VDisk. In this case, Metro Mirror operates to ensure that a consistent image is left at the secondary VDisk, and then continues to allow I/O to the primary VDisk, so as not to impact the operations at the production site. Figure 6-12 illustrates how a write to the master VDisk is mirrored to the cache of the auxiliary VDisk before an acknowledgement of write is sent back to the host that issued the write. This ensures that the secondary is real-time synchronized, in case it is needed in a failover situation. However, this also means that the application is fully exposed to the latency and bandwidth limitations (if any) of the communication link to the secondary site. This might lead to unacceptable application performance, particularly when placed under peak load. This is the reason for distance limitations when using Metro Mirror.

Figure 6-12 Write on VDisk in Metro Mirror relationship

6.5.3 SVC Metro Mirror features


SVC Metro Mirror supports the following features:
- Synchronous remote copy of VDisks dispersed over metropolitan-scale distances.
- SVC implements Metro Mirror relationships between VDisk pairs, with each VDisk in a pair managed by an SVC cluster.
- SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster (and I/O group).
- SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC cluster. A given SVC cluster can be configured for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters configured in a partnership.
- Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for different relationships.
- SVC does not require a control network or fabric to be installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between the two clusters. This control link is used to control state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.
- SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC maintains and polices a strong concept of consistency and makes this available to guide configuration activity.
- SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.

6.5.4 Multi-Cluster-Mirroring (MCM)


With the introduction of Multi-Cluster-Mirroring (MCM) in SVC 5.1, a cluster may be configured with multiple partner clusters. Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist between a maximum of four SVC clusters. The SVC clusters may take advantage of the maximum number of remote mirror relationships because MCM enables customers to copy from several remote sites to a single SVC cluster at a disaster recovery site. It supports implementation of consolidated DR strategies and helps customers who are moving or consolidating data centers. Figure 6-13 on page 282 shows an example of MCM configuration.

Figure 6-13 MCM configuration example

Supported Multi-Cluster topologies


Prior to SVC 5.1, the allowed cluster topologies were A (no partnership configured) or A-B (one partnership configured). With Multiple Cluster Mirroring, there is a wider range of possible topologies. A maximum of four clusters may be connected, directly or indirectly, which implies that a cluster may never have more than three partners. For example, the topology A-B, A-C, A-D is allowed. Figure 6-14 on page 283 shows a star topology.

Figure 6-14 SVC star topology

This shows four clusters in a star topology, with cluster A at the centre. Cluster A can act as a central disaster recovery site for the three other locations. Using a star topology, it is possible to migrate different applications at different times using a process such as:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively the B-C relationship).
4. Synchronize to cluster C, and ensure that A-C is established.
Other allowed topologies include the fully connected mesh (A-B, A-C, A-D, B-C, B-D, C-D) and the triangle (A-B, A-C, B-C). Figure 6-15 shows a triangle topology.

Figure 6-15 SVC triangle topology

There are three clusters in a triangle topology. Figure 6-16 shows a fully-connected topology.

Figure 6-16 SVC fully-connected topology

This is a fully connected mesh in which every cluster has a partnership with each of the three others. VDisks can then be replicated between any pair of clusters, but this is not required unless relationships are needed between every pair of clusters. The other allowed option is a daisy-chain between four clusters (A-B, B-C, C-D), where we can have a type of cascading solution; however, a VDisk can be in only one relationship, such as A to B, for example. At the time of writing, a three-site solution such as DS8000 Metro/Global Mirror is not supported. Figure 6-17 shows a daisy-chain topology.

Figure 6-17 SVC daisy-chain topology

Unsupported topology
As an illustration of what is not supported, consider this example: A-B, B-C, C-D, D-E. Figure 6-18 shows this unsupported topology.

Figure 6-18 SVC unsupported topology

This is unsupported because five clusters are indirectly connected. If the cluster detects this condition at the time of the fourth mkpartnership command, the command is rejected.

Note: The introduction of Multiple Cluster Mirroring necessitates some upgrade restrictions:
- Concurrent Code Upgrade (CCU) to 5.1.0 is supported from 4.3.1.x only.
- If the cluster is in a partnership, the partnered cluster must meet a minimum software level to allow concurrent I/O: the partnered cluster must be running 4.2.1 or later.

6.5.5 Metro Mirror relationship


A Metro Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O group, within the same SVC cluster (intracluster Metro Mirror), or can be on separate SVC clusters that are defined as SVC partners (intercluster Metro Mirror).

Note: Be aware that:
- A VDisk can only be part of one Metro Mirror relationship at a time.
- A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.

Figure 6-19 illustrates the Metro Mirror relationship.

Figure 6-19 Metro Mirror relationship

Metro Mirror relationship between primary and secondary VDisks


When creating a Metro Mirror relationship, one VDisk should be defined as the master, and the other as the auxiliary. The relationship between two copies is symmetric. When a Metro Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as target). This implies that the initial copy direction is mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, the copy direction can be changed if appropriate. In most common applications of Metro Mirror, the master VDisk contains the production copy of the data, and is used by the host application, while the auxiliary VDisk contains a mirrored copy of the data and is used for failover in disaster recovery scenarios. The terms master and auxiliary help support this use. However, if Metro Mirror is applied differently, the terms master and auxiliary VDisks need to be interpreted appropriately.

6.5.6 Importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as loss of power or a software crash, and not lose data that existed prior to the failure. Since many
applications need to perform large numbers of update operations in parallel to that storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption. An application that is performing a large set of updates, for example, a database, is usually designed with the concept of dependent writes. These are writes where it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application's algorithms and can lead to problems, such as detected, or undetected, data corruption.

Dependent writes that span multiple VDisks


The following scenario illustrates a simple example of a sequence of dependent writes, and in particular, what can happen if they span multiple VDisks. Consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update will be performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that a database update has completed successfully.
Figure 6-20 on page 286 shows the write sequence.

Figure 6-20 Dependent writes for a database

The database ensures correct ordering of these writes by waiting for each step to complete before starting the next.

Note: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database itself are on different VDisks, and a Metro Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Metro Mirror relationship for the VDisk containing the database file is stopped slightly before the VDisk containing the database log. If this were the case, it could be possible that the secondary VDisks see writes (1) and (3), but not (2). Then, if the database was restarted using the data available from the secondary disks, the database log would indicate that the transaction had completed successfully, when that is not the case. In this scenario, the integrity of the database is in question.

Metro Mirror consistency groups


Metro Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Metro Mirrored VDisks. Consistency groups ensure a consistent data set when applications have related data that spans multiple VDisks. A Metro Mirror consistency group can contain an arbitrary number of relationships, up to the maximum number of Metro Mirror relationships supported by the SVC cluster. Metro Mirror commands can be issued to a Metro Mirror consistency group, and thereby simultaneously to all Metro Mirror relationships defined within that consistency group, or to a single Metro Mirror relationship that is not part of a Metro Mirror consistency group. For example, when issuing a Metro Mirror startrcconsistgrp command to the consistency group, all of the Metro Mirror relationships in the consistency group are started at the same time. The concept of Metro Mirror consistency groups is illustrated in Figure 6-21. Because MM_Relationship 1 and 2 are part of the consistency group, they can be handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.

Figure 6-21 Metro Mirror consistency group

Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror consistency groups provide the ability to group relationships, so that they are manipulated in unison. Note the following points:
- Metro Mirror relationships can be part of a consistency group, or be stand-alone and therefore handled as single instances.
- A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All the relationships in a consistency group must have matching master and auxiliary SVC clusters.
Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited that would be permitted if the relationship were not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the secondary VDisks of either application. If one application finishes its background copy much more quickly than the other, Metro Mirror still refuses to grant access to its secondary, even though it is safe in this case,
because Metro Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent. Stand-alone relationships and consistency groups share a common configuration and state model. All the relationships in a non-empty consistency group have the same state as the consistency group.
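As a minimal illustration of grouping relationships, the following sketch creates a consistency group on the master cluster and adds a new relationship to it; the cluster, VDisk, group, and relationship names are hypothetical:

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_DB
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master DB_Data -aux DB_Data_DR -cluster ITSO-CLS2 -consistgrp CG_DB -name MM_DB_Data
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_DB

Subsequent start and stop commands are then issued against CG_DB, so that all relationships in the group change state together.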

6.5.7 How Metro Mirror works


In the sections that follow, we describe how Metro Mirror works.

Intercluster communication and zoning


All intercluster communication is performed over the SAN. Prior to creating intercluster Metro Mirror relationships, you must create a partnership between the two clusters. SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, a zone in each fabric must be defined for intercluster communication (see Chapter 3, Planning and configuration on page 63).

SVC cluster partnership


Each SVC cluster can be in a partnership with up to three other SVC clusters. When an SVC cluster partnership has been defined on both of a pair of clusters, further communication facilities between the nodes in each of the clusters are established. These comprise:
- A single control channel, which is used to exchange and coordinate configuration information
- I/O channels between each of these nodes in the clusters
These channels are maintained and updated as nodes appear and disappear and as links fail, and they are repaired to maintain operation where possible. If communication between the SVC clusters is interrupted or lost, an error is logged (and consequently Metro Mirror relationships will stop). To handle error conditions, SVC can be configured to raise SNMP traps to the enterprise monitoring system.

Maintenance of intercluster link


All SVC nodes maintain a database of other devices that are visible on the fabric. This is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and functional protocols of SVC. Nodes that are in different clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform Metro Mirror. The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes.

If the designated node should fail (or all its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This causes I/O to pause, but does not cause relationships to become Consistent Stopped.

6.5.8 Metro Mirror process


There are several major steps in the Metro Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro Mirror).
2. A Metro Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Metro Mirror relationships as one entity, the relationships can be made part of a Metro Mirror consistency group. This ensures data consistency across multiple Metro Mirror relationships, or simply provides ease of management.
4. When a Metro Mirror relationship is started, and when the background copy has completed, the relationship becomes consistent and synchronized.
5. Once synchronized, the secondary VDisk holds a copy of the production data at the primary, which can be used for disaster recovery.
6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
A CLI sketch of steps 1, 2, and 4 follows this list.
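The following minimal sketch shows the first steps on the master cluster for an intercluster configuration; the partner cluster name, bandwidth value, and VDisk and relationship names are hypothetical, and the mkpartnership command must also be issued on the remote cluster to complete the partnership:

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 200 ITSO-CLS2
(repeat mkpartnership on ITSO-CLS2, naming ITSO-CLS1, so that the partnership is fully configured)
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master App_Vol -aux App_Vol_DR -cluster ITSO-CLS2 -name MM_App_Vol
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MM_App_Vol
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MM_App_Vol
(the state field moves from inconsistent_copying to consistent_synchronized when the background copy completes)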

6.5.9 Methods of synchronization


This section describes three methods that can be used to establish a relationship.

Full synchronization after creation


This is the default method. It is the simplest, in that it requires no administrative activity apart from issuing the necessary commands. However, in some environments, the available bandwidth makes this method unsuitable. The command sequence for a single relationship is as follows:
1. Run mkrcrelationship without specifying the -sync option.
2. Run startrcrelationship without specifying the -clean option.

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary VDisks contain identical data before creating the relationship. There are two ways in which this might be done:
- Both disks are created with the security delete feature so as to make all data zero.
- A complete tape image (or other method of moving data) is copied from one disk to the other.
In either technique, no write I/O must take place to either the master or the auxiliary before the relationship is established. Then, the administrator must:
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.

If these steps are not performed correctly, then Metro Mirror will report the relationship as being consistent when it is not. This is likely to make any secondary disk useless. This method has an advantage over full synchronization, in that it does not require all the data to be copied over a constrained link. However, if data needs to be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after creation


In this method, the administrator must still copy data from the master to the auxiliary, but the copy can be performed without stopping the application at the master. The administrator must ensure that:
1. A mkrcrelationship command is issued with the -sync flag.
2. A stoprcrelationship command is issued with the -access flag.
3. A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.
Once the copy is complete, the administrator must ensure that:
4. A startrcrelationship command is issued with the -clean flag.
With this technique, only data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with Synchronized before creation on page 290, the copy step must be performed correctly or the auxiliary will be useless, although the copy operation will report it as being synchronized. A CLI sketch of this sequence follows.
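The following minimal sketch shows the quick synchronization sequence for a single relationship; the VDisk, cluster, and relationship names are hypothetical:

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master App_Vol -aux App_Vol_DR -cluster ITSO-CLS2 -sync -name MM_App_Vol
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MM_App_Vol
(copy the master disk content to the auxiliary disk by tape image or another transfer method)
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -clean MM_App_Vol
(if write I/O has been performed to either VDisk while the relationship was in the Idling state, the -force flag is also required, as described in the state descriptions that follow)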

Metro Mirror states and events


In this section, we explain the different states of a Metro Mirror relationship, and the series of events that modify these states. In Figure 6-22, the Metro Mirror relationship state diagram shows an overview of states that can apply to a Metro Mirror relationship in a connected state.

Figure 6-22 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, and the background copy process is then skipped. This is especially useful when creating Metro Mirror relationships for VDisks that have been created with the format option. Create the relationship as follows:
1. Step 1 is done as follows:
   a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the Consistent stopped state.
   b. The Metro Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped state.
2. Step 2 is done as follows:
   a. When starting a Metro Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state; otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.
   b. When starting a Metro Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state while the background copy is started.

3. Step 3 is done as follows:
   When the background copy completes, the Metro Mirror relationship transitions from the Inconsistent copying state to the Consistent synchronized state.
4. Step 4 is done as follows:
   a. When stopping a Metro Mirror relationship in the Consistent synchronized state and specifying the -access option, which enables write I/O on the secondary VDisk, the Metro Mirror relationship enters the Idling state.
   b. To enable write I/O on the secondary VDisk when the Metro Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Metro Mirror relationship enters the Idling state.
5. Step 5 is done as follows:
   a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Given that no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Metro Mirror relationship enters the Consistent synchronized state.
   b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.
   (A CLI sketch of steps 4 and 5 follows the Note below.)
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Metro Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state. If the connection is broken between the SVC clusters in a partnership, all (intercluster) Metro Mirror relationships enter a disconnected state. For further information, refer to Connected versus disconnected on page 293.

Note: Stand-alone relationships and consistency groups share a common configuration and state model. This means that all Metro Mirror relationships in a non-empty consistency group have the same state as the consistency group.
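The following minimal sketch shows steps 4 and 5 for a single relationship during a planned failover and failback; the relationship name is hypothetical, and the commands are issued on the cluster that owns the relationship:

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MM_App_Vol
(the relationship enters the Idling state and the secondary VDisk can now accept write I/O)
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary aux -force MM_App_Vol
(the copy direction is reversed; -force is required if write I/O occurred while the relationship was in the Idling state)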

6.5.10 State overview


The SVC-defined concepts of state are key to understanding the Metro Mirror configuration model, so they are explained in more detail in the sections that follow.

Connected versus disconnected


This distinction can arise when a Metro Mirror relationship is created with the two VDisks in different clusters. Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other. When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half the relationship and has only a portion of the information that was available to it before. Some limited configuration activity is possible, and is a subset of what was possible before. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship, and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected once again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state it was in when it became disconnected or it can enter a different connected state. Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain VDisks operating as secondaries can be described as being consistent or inconsistent. Consistency groups that contain relationships can also be described as being consistent or inconsistent. The consistent or inconsistent property describes the relationship of the data on the secondary to the data on the primary VDisk. It can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that could have been read by a host system from the primary if power had failed at some imaginary point in time while I/O was in progress and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:

- The secondary VDisk contains the data from all writes to the primary for which the host received good completion and that data had not been overwritten by a subsequent write (before the recovery point).
- For writes for which the host did not receive good completion (that is, it received bad completion or no completion at all), and the host subsequently performed a read from the primary of that data, and that read returned good completion and no later write was sent (before the recovery point), the secondary contains the same data as that returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure. Again, the application depends on the key properties of consistency for correct operation at the secondary:

- Write ordering
- Read stability

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:

- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.

- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all those disks. When deciding how to use consistency groups, the administrator must consider the scope of an application's data, taking into account all the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, then one of the following approaches must be taken:

- All the data accessed by the group of systems is placed into a single consistency group.
- The systems are recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary VDisks are only different in regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at some point in time in the past. Write I/O might have continued to a primary and not have been copied to the secondary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes that happen at the primary, but not the order of such changes, or the details of such changes (write data). When communication is restored, it is impossible to make the secondary synchronized without sending write data to the secondary out-of-order, and therefore losing consistency. Two policies can be used to cope with this situation:

- Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.
- Accept the loss of consistency, and the loss of a useful secondary, while making it synchronized.

6.5.11 Detailed states


The following sections detail the states that are portrayed to the user for either consistency groups or relationships. They also detail the extra information that is available in each state. The major states are designed to provide guidance about which configuration commands are available.

InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either. A copy process needs to be started to make the secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected, and the primary side transits to IdlingDisconnected.

InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group.

In this state, a background copy process runs that copies data from the primary to the secondary virtual disk. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In some error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or consistency group into an InconsistentStopped state. A start command is accepted, but has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state. If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected, and the primary side transits to IdlingDisconnected.

ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary. This state can arise when a relationship was in a ConsistentSynchronized state and suffers an error that forces a consistency freeze. It can also arise when a relationship is created with the CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this, and the relationship or consistency group transits to InconsistentCopying. Do this only after all outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required.

Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, the secondary side transits to ConsistentDisconnected, and the primary side transits to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. This log can be configured to enable an SNMP trap, providing a trigger for automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O, and the secondary VDisk is accessible for read-only I/O. Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks. Either good completion must be received for both writes, the write must be failed to the host, or a state must transit out of the ConsistentSynchronized state before a write is completed to the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the primary and secondary roles. A start command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling
This is a connected state. Both the master and auxiliary disks are operating in the primary role. Consequently, both are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O. This is indicated by the synchronized status. If the start command leads to a loss of consistency, the -force parameter must be specified.

Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is such a loss. Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected
This is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.

The main priority in this state is to recover the link and make the relationship or consistency group connected once more. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on:

- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transits from TRUE to FALSE) and the relationship was not already stopped (either through user stop or a persistent error), then an error log is raised to notify you of this situation. This error log is the same as that raised when the same situation arises for ConsistentSynchronized.

InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O. No configuration activity except for deletes is permitted until the relationship becomes connected again.

When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically unless either:

- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to TRUE transits the relationship or consistency group to the IdlingDisconnected state. This allows write I/O to be performed to the secondary VDisk and is used as part of a disaster recovery scenario.

When the relationship or consistency group becomes connected again, the relationship or consistency group becomes ConsistentSynchronized only if this does not lead to a loss of consistency. This is the case provided that:

- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show. It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point the state of the relationship becomes the state of the consistency group.

Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a Status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks that the node is responsible for. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.

6.5.12 Practical use of Metro Mirror


The master VDisk is the production VDisk, and updates to this copy are mirrored in real time to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

Note: The copy direction for a Metro Mirror relationship can be switched so the auxiliary VDisk becomes the primary, and the master VDisk becomes the secondary.

While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for host application write I/O at any time. The SVC allows read-only access to the secondary VDisk when it contains a consistent image. This is only intended to allow boot time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay if required. For example, many operating systems need to read Logical Block Address (LBA) zero to configure a logical unit.

Although read access is allowed at the secondary, in practice the data on the secondary volumes cannot be read by a host. The reason for this is that most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted. Read access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads performed at the secondary and later write I/Os performed at the primary.

To enable access to the secondary VDisk for host operations, the Metro Mirror relationship must be stopped by specifying the -access parameter. While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and perform related tasks before the application can be started, or instructed to perform a recovery process.

The Metro Mirror requirement to explicitly enable the secondary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of which system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but suppresses one while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host involved in establishing operation on the secondary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions
Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single VDisk.
Table 6-7 VDisk valid combinations

FlashCopy           Metro Mirror or Global Mirror primary   Metro Mirror or Global Mirror secondary
FlashCopy source    Supported                               Supported
FlashCopy target    Not supported                           Not supported

6.5.14 Metro Mirror configuration limits


Table 6-8 lists the Metro Mirror configuration limits.
Table 6-8 Metro Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror consistency groups per cluster         256
Number of Metro Mirror relationships per cluster              8192
Number of Metro Mirror relationships per consistency group    8192
Total VDisk size per I/O group                                1024 TB

There is a per I/O group limit of 1024 TB on the quantity of primary and secondary VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O group and leaves no FlashCopy bitmap space.

6.6 Metro Mirror commands


For comprehensive details about Metro Mirror commands, refer to the IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, SC26-7903.

The command set for Metro Mirror contains two broad groups:

- Commands to create, delete, and manipulate relationships and consistency groups
- Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror performs the work to coordinate configuration activity between the clusters. Some configuration commands can only be performed when the clusters are connected, and they fail with no effect when the clusters are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected once more.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception is the command that sets clusters into a Metro Mirror partnership: the mkpartnership command must be issued to both the local and remote clusters.

The commands here are described as an abstract command set. These are implemented as:

- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

6.6.1 Listing available SVC cluster partners


To create an SVC cluster partnership, use the command svcinfo lsclustercandidate.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This is a prerequisite for creating Metro Mirror relationships.
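As an illustration, the command takes no mandatory parameters; any cluster that it reports is a candidate for the mkpartnership command described in the next section:

svcinfo lsclustercandidate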

6.6.2 Creating SVC cluster partnership


To create an SVC cluster partnership, use the command svctask mkpartnership.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command on both clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster.

If the bandwidth is not specified, it defaults to 50 MBps. The bandwidth should be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
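A minimal, hypothetical sketch, assuming the remote cluster is named ITSO-CLS2 (a placeholder) and a background copy bandwidth of 50 MBps; remember that the equivalent command must also be issued on ITSO-CLS2, naming the local cluster, to make the partnership fully functional:

svctask mkpartnership -bandwidth 50 ITSO-CLS2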

Background copy bandwidth impact on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy for the SAN Volume Controller will be attempted. The background copy bandwidth can affect foreground I/O latency in one of three ways:

- If the background copy bandwidth is set too high for the Metro Mirror intercluster link capacity, the background copy I/Os can back up on the Metro Mirror intercluster link, there is a delay in the synchronous secondary writes of foreground I/Os, and the foreground I/O latency increases as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, make sure that you consider all three resources (the primary storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by calculation (as above) or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.

svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
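For example, to lower the background copy bandwidth of an existing partnership to 30 MBps (the cluster name is a placeholder):

svctask chpartnership -bandwidth 30 ITSO-CLS2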

6.6.3 Creating a Metro Mirror consistency group


To create a Metro Mirror consistency group, use the command svctask mkrcconsistgrp.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Metro Mirror consistency group. The Metro Mirror consistency group name must be unique across all consistency groups known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process. The new consistency group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterwards by using the svctask chrcrelationship command.
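A hypothetical example that creates an empty intercluster consistency group (the group and remote cluster names are placeholders):

svctask mkrcconsistgrp -name CG_W2K_MM -cluster ITSO-CLS2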

6.6.4 Creating a Metro Mirror relationship


To create a Metro Mirror relationship, use the command svctask mkrcrelationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted. The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The master and auxiliary VDisk cannot be in an existing relationship, and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, it can be added to an already existing consistency group, or be a stand-alone Metro Mirror relationship if no consistency group is specified. To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Metro Mirror relationship, use the command svcinfo lsrcrelationshipcandidate.

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all VDisks that are not disallowed by some other configuration state, such as being a FlashCopy target, are listed.
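As an illustrative sketch (all object names are placeholders), you might first list the candidate VDisks and then create a relationship that is added to an existing consistency group; adding the -sync flag instead would skip the initial background copy for VDisks that are already identical:

svcinfo lsrcrelationshipcandidate
svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K_MM -name MM_Rel2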

6.6.5 Changing a Metro Mirror relationship


To modify the properties of a Metro Mirror relationship, use the command svctask chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:

- Change the name of a Metro Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Note: When adding a Metro Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
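For example, to rename a relationship and then add it to an existing consistency group (names are placeholders):

svctask chrcrelationship -name MM_Rel2_new MM_Rel2
svctask chrcrelationship -consistgrp CG_W2K_MM MM_Rel2_new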

6.6.6 Changing a Metro Mirror consistency group


To change the name of a Metro Mirror consistency group, use the command svctask chrcconsistgrp.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror consistency group.
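For example, to rename a consistency group (names are placeholders):

svctask chrcconsistgrp -name CG_W2K_MM_new CG_W2K_MM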

6.6.7 Starting a Metro Mirror relationship


To start a stand-alone Metro Mirror relationship, use the command svctask startrcrelationship.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, the copy direction can be set, if it is undefined, and optionally the secondary VDisk of the relationship can be marked as clean. The command fails if it is used to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is not consistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force flag here is a reminder that the data on the secondary becomes inconsistent while resynchronization (background copying) occurs and is therefore not usable for disaster recovery purposes before the background copy has completed.

In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
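For example, to start an idling stand-alone relationship with the master VDisk as the copy source, accepting the temporary loss of consistency during resynchronization (the relationship name is a placeholder):

svctask startrcrelationship -primary master -force MM_Rel1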

6.6.8 Stopping a Metro Mirror relationship


To stop a stand-alone Metro Mirror relationship, use the command svctask stoprcrelationship.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue a svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state) then the -access parameter can be used with the stoprcrelationship command to enable write access to the secondary VDisk.
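For example, to stop a consistent stand-alone relationship and enable write access to its secondary VDisk (the relationship name is a placeholder):

svctask stoprcrelationship -access MM_Rel1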

6.6.9 Starting a Metro Mirror consistency group


To start a Metro Mirror consistency group, use the command svctask startrcconsistgrp.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group. This command can only be issued to a consistency group that is connected. For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by some I/O error.
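For example, to start an idling consistency group with the master VDisks as the copy source (the group name is a placeholder):

svctask startrcconsistgrp -primary master CG_W2K_MM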

6.6.10 Stopping a Metro Mirror consistency group


To stop a Metro Mirror consistency group, use the command svctask stoprcconsistgrp.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks belonging to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a consistency freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
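For example, to stop a consistent group and enable write access to its secondary VDisks (the group name is a placeholder):

svctask stoprcconsistgrp -access CG_W2K_MM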

6.6.11 Deleting a Metro Mirror relationship


To delete a Metro Mirror relationship, use the command svctask rmrcrelationship.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves. If the relationship is disconnected at the time that the command is issued, then the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This is the one case in which Metro Mirror does not inhibit access to inconsistent data.
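For example (the relationship name is a placeholder):

svctask rmrcrelationship MM_Rel1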

6.6.12 Deleting a Metro Mirror consistency group


To delete a Metro Mirror consistency group, use the command svctask rmrcconsistgrp.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group. This command deletes the specified consistency group. You can issue this command for any existing consistency group. If the consistency group is disconnected at the time that the command is issued, then the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the consistency group is not empty, then the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.
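For example (the group name is a placeholder):

svctask rmrcconsistgrp CG_W2K_MM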

6.6.13 Reversing a Metro Mirror relationship


To reverse a Metro Mirror relationship, use the command svctask switchrcrelationship.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of primary and secondary VDisk when a stand-alone relationship is in a consistent state. When issuing the command, the desired primary is specified.
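For example, to make the auxiliary VDisk the primary of a consistent stand-alone relationship (the relationship name is a placeholder):

svctask switchrcrelationship -primary aux MM_Rel1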

6.6.14 Reversing a Metro Mirror consistency group


To reverse a Metro Mirror consistency group, use the command svctask switchrcconsistgrp.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of primary and secondary VDisk when a consistency group is in a consistent state. This change is applied to all the relationships in the consistency group, and when issuing the command, the desired primary is specified.
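For example, to reverse the copy direction of a consistent consistency group (the group name is a placeholder):

svctask switchrcconsistgrp -primary aux CG_W2K_MM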

6.6.15 Background copy


Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a Status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between the nodes that are performing background copy for one of the eligible relationships. This allocation is made without regard for the number of disks that the node is responsible for. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy.

For intracluster relationships, each node is assigned a static quota of 25 MBps.

6.7 Global Mirror overview


In the topics that follow, we describe the Global Mirror (GM) copy service, which is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source VDisk to a target VDisk. Data is written from the source VDisk to the target VDisk asynchronously. This method was previously known as Asynchronous Peer-to-Peer Remote Copy.

Global Mirror works by defining a Global Mirror relationship between two VDisks of equal size and maintains the data consistency in an asynchronous manner. Therefore, when a host writes to a source VDisk, the data is copied from the source VDisk cache to the target VDisk cache. At the initiation of that data copy, confirmation of I/O completion is transmitted back to the host.

Note: The minimum firmware requirement for GM functionality is V4.1.1. Any cluster or partner cluster not running this minimum level will not have GM functionality available. Even if you have a Global Mirror relationship running on a down-level partner cluster and you only wish to use intracluster GM, the functionality will not be available to you.

SVC provides both intracluster and intercluster Global Mirror.

6.7.1 Intracluster Global Mirror


Although Global Mirror is available for intracluster, it has no functional value for production use. Intracluster Metro Mirror provides the same capability with less overhead. However, leaving this functionality in place simplifies testing and does allow for customer experimentation and testing (for example, to validate server failover on a single test cluster).

6.7.2 Intercluster Global Mirror


Intercluster Global Mirror operations require a pair of SVC clusters that are commonly separated by a number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship. Note: When a local and a remote fabric are connected together for Global Mirror purposes, the ISL hop count between a local node and a remote node should not exceed seven hops.

6.8 Remote copy techniques


Global Mirror is an asynchronous remote copy that is briefly explained below. To illustrate the differences between synchronous and asynchronous remote copy, synchronous remote copy is also explained below.

6.8.1 Asynchronous remote copy


Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write operations are completed on the primary site and the write acknowledgement is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage. This provides the capability of performing remote copy over distances exceeding the limitations of synchronous remote copy.

The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link. Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the host issuing the write before it is mirrored to the cache for the auxiliary VDisk.

Figure 6-23 GM write sequence

The Global Mirror algorithms operate so as to maintain a consistent image at the secondary at all times. They achieve this by identifying sets of I/Os that are active concurrently at the primary, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of write ordering and read stability described in this chapter. The multiple I/Os within a single set are applied concurrently.

The process that marshals the sequential sets of I/Os operates at the secondary cluster, and so it is not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size, while maintaining consistency across a growing data set.

In a failover scenario, where the secondary site needs to become the primary source of data, some updates might be missing at the secondary site. Therefore, any application that uses this data must have an external mechanism for recovering the missing updates and reapplying them, for example, transaction log replay.

6.8.2 SVC Global Mirror features


SVC Global Mirror supports the following features:

- Asynchronous remote copy of VDisks dispersed over metropolitan scale distances is supported.
- SVC implements the Global Mirror relationship between a VDisk pair, with each VDisk in the pair being managed by an SVC cluster.

- SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster (and I/O group), although, as stated earlier, this functionality is better suited to Metro Mirror.
- SVC supports intercluster Global Mirror, where each VDisk belongs to a separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
- Intercluster and intracluster Global Mirror can be used concurrently within a cluster for different relationships.
- SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection as the SVC uses for Global Mirror I/O.
- SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC maintains and polices a strong concept of consistency and makes this available to guide configuration activity.
- SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.

Colliding writes: Prior to 4.3.1, the Global Mirror algorithm required that only a single write be active on any given 512-byte LBA of a virtual disk. If a further write is received from a host while the secondary write is still active, even though the primary write might have completed, the new host write is delayed until the secondary write is complete. This restriction is needed in case a series of writes to the secondary have to be retried (reconstruction). Conceptually, the data for reconstruction comes from the primary VDisk. If more than one write were allowed to be applied to the primary for a given sector, only the most recent write would provide the correct data during reconstruction, and if reconstruction were interrupted for any reason, the intermediate state of the secondary would not be consistent. Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A VDisk statistic is maintained of the frequency of these collisions.

From 4.3.1 onwards, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for primary writes to be serialized, and the intermediate states of the primary data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the secondary with an earlier version. The VDisk statistic monitoring colliding writes is now limited to those writes that are not affected by this change.

Figure 6-24 on page 311 shows a colliding write sequence example.

Figure 6-24 Colliding writes example

In Figure 6-24 you can see the following sequence:

(1) The original GM write is in progress.
(2) A second write to the same sector arrives, and the in-flight write is logged to the journal file.
(3 and 4) The second write is sent to the secondary cluster.
(5) The initial write completes.

An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary virtual disks. This allows testing to be performed that detects colliding writes, so it can be used to test an application before full deployment of the feature. The feature can be enabled separately for intracluster or intercluster Global Mirror. The delay setting is specified using the chcluster command and viewed using the lscluster command:

- gm_intra_delay_simulation expresses the amount of time that intracluster secondary I/Os are delayed.
- gm_inter_delay_simulation expresses the amount of time that intercluster secondary I/Os are delayed.

A value of zero disables the feature (see the hedged CLI sketch at the end of this section).

SVC 5.1 introduces Multi-Cluster Mirroring (MCM). The rules for a GM MCM environment are the same as in an MM environment. For more detailed information, see Chapter 6.5.4, Multi-Cluster-Mirroring (MCM) on page 281.
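As a hedged sketch only (verify the exact parameter spellings against the Command-Line Interface User's Guide for your code level), the delay values, in milliseconds, might be set and then displayed as follows; the cluster name is a placeholder:

svctask chcluster -gmintradelaysimulation 40
svctask chcluster -gminterdelaysimulation 20
svcinfo lscluster ITSO-CLS1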

6.9 Global Mirror relationships


Global Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or combined in consistency groups. The start and stop commands can be issued either against the stand-alone relationship or the consistency group. Figure 6-25 illustrates the Global Mirror relationship.

Figure 6-25 Global Mirror relationship

A Global Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O group, within the same SVC cluster (intracluster Global Mirror), or they can be on separate SVC clusters that are defined as SVC partners (intercluster Global Mirror).

Note: Be aware that:
- A VDisk can only be part of one Global Mirror relationship at a time.
- A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.

6.9.1 Global Mirror relationship between primary and secondary VDisk


When creating a Global Mirror relationship, the master VDisk is initially assigned as the primary, and the auxiliary VDisk as the secondary. This implies that the initial copy direction is mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, the copy direction can be changed, if appropriate. In most common applications of Global Mirror, the master VDisk contains the production copy of the data, and is used by the host application, while the auxiliary VDisk contains the mirrored copy of the data and is used for failover in disaster recovery scenarios. The terms master and auxiliary help support this use. If Global Mirror is applied differently, the terms master and auxiliary VDisks need to be interpreted appropriately.

6.9.2 Importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as loss of power or a software crash, and to not lose data that existed prior to the failure. Because many applications need to perform large numbers of update operations in parallel to that storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.

An application that is performing a large set of updates, for example, a database, is usually designed with the concept of dependent writes. These are writes where it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine the application's algorithms and can lead to problems, such as detected or undetected data corruption.

6.9.3 Dependent writes that span multiple VDisks


The following scenario illustrates a simple example of a sequence of dependent writes, and in particular what can happen if they span multiple VDisks. Consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The write sequence is illustrated in Figure 6-26.

Figure 6-26 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next.

Note: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database itself are on different VDisks and a Global Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Global Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log. If this is the case, it could be possible that the secondary VDisks see writes (1) and (3) but not (2). Then, if the database was restarted using the data available from the secondary disks, the database log would indicate that the transaction had completed successfully, when this is not the case. In this scenario, the integrity of the database is in question.

6.9.4 Global Mirror consistency groups


Global Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Global Mirrored VDisks. Consistency groups ensure a consistent data set, because applications have relational data spanning multiple VDisks.

A Global Mirror consistency group can contain an arbitrary number of relationships, up to the maximum number of Global Mirror relationships supported by the SVC cluster. Global Mirror commands can be issued to a Global Mirror consistency group, and thereby simultaneously to all Global Mirror relationships defined within that consistency group, or to a single Global Mirror relationship if it is not part of a Global Mirror consistency group. For example, when issuing a Global Mirror start command to the consistency group, all of the Global Mirror relationships in the consistency group are started at the same time.

Figure 6-27 illustrates the concept of Global Mirror consistency groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the consistency group, they can be handled as one entity, while the stand-alone GM_Relationship 3 is handled separately.

Figure 6-27 Global Mirror consistency group

Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror consistency groups provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships within a consistency group can be in any form:

- Global Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.

- A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All the relationships in a consistency group must have matching master and auxiliary SVC clusters.

Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited that would be permitted if the relationship was not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Global Mirror rejects attempts to enable access to the secondary VDisks of either application. If one application finishes its background copy much more quickly than the other, Global Mirror still refuses to grant access to its secondary, even though it is safe in this case, because the Global Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All the relationships in a non-empty consistency group have the same state as the consistency group.

6.10 How Global Mirror works


This section discusses how Global Mirror works.

6.10.1 Intercluster communication and zoning


All intercluster communication is performed through the SAN. Prior to creating intercluster Global Mirror relationships, you must create a partnership between the two clusters. SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, a zone in each fabric must be defined for intercluster communication; see Chapter 3, Planning and configuration on page 63 for more information.

6.10.2 SVC Cluster partnership


When the SVC cluster partnership has been defined on both clusters, further communication facilities between the nodes in each of the clusters are established. These comprise:
- A single control channel, which is used to exchange and coordinate configuration information
- I/O channels between each of the nodes in the clusters
These channels are maintained and updated as nodes appear and disappear and as links fail, and are repaired to maintain operation where possible. If communication between the SVC clusters is interrupted or lost, an error is logged (and consequently Global Mirror relationships will stop).


To handle error conditions, the SVC can be configured to raise SNMP traps or send e-mail notifications. If TPC for Replication (TPC-R) is in place, TPC-R can also monitor the link status and alert by using SNMP traps or e-mail.

6.10.3 Maintenance of the intercluster link


All SVC nodes maintain a database of the other devices that are visible on the fabric. This is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement the clustering and functional protocols of SVC. Nodes that are in different clusters do not exchange messages after the initial discovery is complete unless they have been configured together to perform Global Mirror. The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes. If the designated node should fail (or all its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This causes I/O to pause, but does not cause relationships to become Consistent Stopped.

6.10.4 Distribution of work amongst nodes


Global Mirror VDisks should have their preferred nodes evenly distributed between the nodes of the clusters. Each VDisk within an I/O group has a preferred node property that can be used to balance the I/O load between nodes in that group. This property is also used by Global Mirror to route I/O between clusters. Figure 6-28 shows the best relationship between VDisks and their preferred nodes in order to get the best performance.

Figure 6-28 Preferred VDisk GM relationship


6.10.5 Background Copy Performance


Background copy resources for intercluster remote copy are available within the two nodes of an I/O Group, which can perform background copy at a maximum of 200 MB/s (of data read, and of data written) in total. This is subject to there being sufficient RAID controller bandwidth and no other bottleneck (such as the intercluster fabric), and also subject to no contention from host I/O for SVC bandwidth resources.

Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect on system behavior. A whole grain of tracks on one virtual disk is processed at around the same time, but not as a single I/O. Double buffering is used to try to take advantage of sequential performance within a grain. However, the next grain within the virtual disk might not be scheduled for some time. Multiple grains might be copied simultaneously, enough to satisfy the requested rate, unless the available resources cannot sustain the requested rate.

Background copy proceeds from low LBA to high LBA in sequence. This avoids convoying with FlashCopy, which operates in the opposite direction. Background copy is not expected to convoy with sequential applications, because its position on the disk tends to vary more often.

6.10.6 Space-efficient background copy


Prior to SVC 4.3.1, if a primary VDisk was space-efficient the background copy process would cause the secondary to become fully allocated. When both primary and secondary clusters are running SVC 4.3.1 or higher, Metro Mirror and Global Mirror relationships can preserve the space-efficiency of the primary. Conceptually, the background copy process detects an unallocated region of the primary and sends a special zero buffer to the secondary. If the secondary VDisk is space-efficient, and the region is unallocated, the special buffer will prevent a write (and therefore an allocation). If the secondary VDisk is not space-efficient, or the region in question is an allocated region of a space-efficient VDisk, a buffer of real zeros will be synthesized on the secondary and written as normal. If the secondary cluster is running code prior to SVC 4.3.1, this will be detected by the primary cluster and a buffer of real zeros will be transmitted and written on the secondary. The background copy rate will control the rate at which the virtual capacity is being copied.

6.11 Global Mirror process


There are several steps in the Global Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global Mirror).
2. A Global Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror consistency group. This ensures data consistency across multiple Global Mirror relationships, or simply eases management.
4. The Global Mirror relationship is started, and when the background copy has completed, the relationship is consistent and synchronized.
5. Once synchronized, the secondary VDisk holds a copy of the production data at the primary that can be used for disaster recovery.


6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the access option enabled, before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
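The following minimal sketch shows one way these steps might be issued with the CLI commands that are described in 6.12, Global Mirror commands. The cluster, VDisk, and relationship names (ITSO_CL1, ITSO_CL2, GM_MASTER, GM_AUX, and GM_REL1) and the 50 MBps bandwidth value are hypothetical placeholders; verify the exact parameter names against the Command-Line Interface User's Guide:

svctask mkpartnership -bandwidth 50 ITSO_CL2     (issued on the local cluster, ITSO_CL1)
svctask mkpartnership -bandwidth 50 ITSO_CL1     (issued on the remote cluster, ITSO_CL2)
svctask mkrcrelationship -master GM_MASTER -aux GM_AUX -cluster ITSO_CL2 -global -name GM_REL1
svctask startrcrelationship GM_REL1
svcinfo lsrcrelationship GM_REL1                 (monitor until the state is consistent_synchronized)
svctask stoprcrelationship -access GM_REL1       (only when write access to the auxiliary VDisk is required)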

6.11.1 Methods of synchronization


This section describes three methods that can be used to establish a relationship.

Full synchronization after creation


This is the default method. It is the simplest method, and it requires no administrative activity apart from issuing the necessary commands. However, in some environments, the available bandwidth makes this method unsuitable. The sequence for a single relationship is as follows:
1. A new relationship is created (mkrcrelationship is issued) without specifying the -sync flag.
2. The new relationship is started (startrcrelationship is issued) without the -clean flag.
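As a minimal sketch of this default method, assuming a hypothetical relationship named GM_REL1 between VDisks GM_MASTER and GM_AUX on a remote cluster ITSO_CL2:

svctask mkrcrelationship -master GM_MASTER -aux GM_AUX -cluster ITSO_CL2 -global -name GM_REL1
svctask startrcrelationship GM_REL1

The background copy then copies the entire master VDisk to the auxiliary VDisk over the intercluster link.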

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary virtual disks contain identical data before creating the relationship. There are two ways in which this might be done:
- Both disks are created with the security delete (-fmtdisk) feature so as to make all data zero.
- A complete tape image (or other method of moving data) is copied from one disk to the other.
With either technique, no write I/O must take place on either the master or the auxiliary before the relationship is established. Then, the administrator must ensure that:
- A new relationship is created (mkrcrelationship is issued) with the -sync flag.
- The new relationship is started (startrcrelationship is issued) without the -clean flag.
If these steps are not performed correctly, the relationship will be reported as being consistent when it is not. This is likely to make any secondary disk useless. This method has an advantage over full synchronization: It does not require all the data to be copied over a constrained link. However, if the data needs to be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after creation


In this method, the administrator must still copy data from master to auxiliary, but the copy can be performed without stopping the application at the master. The administrator must ensure that:
1. A new relationship is created (mkrcrelationship is issued) with the -sync flag.
2. The new relationship is stopped (stoprcrelationship is issued) with the -access flag.
3. A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.
Once the copy is complete, the administrator must ensure that the relationship is started (startrcrelationship is issued) with the -clean flag.
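A sketch of this method, using the same hypothetical object names as before and the flags described above (verify the exact syntax against the Command-Line Interface User's Guide), is:

svctask mkrcrelationship -master GM_MASTER -aux GM_AUX -cluster ITSO_CL2 -global -sync -name GM_REL1
svctask stoprcrelationship -access GM_REL1
(copy the entire master disk to the auxiliary disk, for example, from a tape image)
svctask startrcrelationship -clean GM_REL1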


With this technique, only the data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from master to auxiliary. As with Synchronized before creation on page 318, the copy step must be performed correctly, or else the auxiliary will be useless, even though it will be reported as being synchronized.

Global Mirror states and events


In this section, we explain the different states of a Global Mirror relationship and the series of events that modify these states. Figure 6-29 shows an overview of the states that apply to a Global Mirror relationship in the connected state.

Figure 6-29 Global Mirror state diagram

When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, and the background copy process is then skipped. This is especially useful when creating Global Mirror relationships for VDisks that have been created with the format option.

The following steps explain the Global Mirror state diagram:
1. Step 1 is done as follows:
   a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the Consistent stopped state.
   b. The Global Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent stopped state.


2. Step 2 is done as follows:
   a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state. Otherwise, the -force option must be specified, and the Global Mirror relationship then enters the Inconsistent copying state while the background copy is started.
   b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state while the background copy is started.
3. Step 3 is done as follows:
   a. When the background copy completes, the Global Mirror relationship transits from the Inconsistent copying state to the Consistent synchronized state.
4. Step 4 is done as follows:
   a. When stopping a Global Mirror relationship in the Consistent synchronized state while specifying the -access option, which enables write I/O on the secondary VDisk, the Global Mirror relationship enters the Idling state.
   b. To enable write I/O on the secondary VDisk when the Global Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state.
5. Step 5 is done as follows:
   a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Provided that no write I/O has been performed (to either the master or the auxiliary VDisk) while in the Idling state, the Global Mirror relationship enters the Consistent synchronized state.
   b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force option must be specified, and the Global Mirror relationship then enters the Inconsistent copying state while the background copy is started.

Stop or Error: When a Global Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Global Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Global Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state. If the connection is broken between the SVC clusters in a partnership, then all (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 320.

Note: Stand-alone relationships and consistency groups share a common configuration and state model. This means that all the Global Mirror relationships in a non-empty consistency group have the same state as the consistency group.

6.11.2 State overview


The SVC-defined concepts of state are key to understanding the configuration model and are therefore explained in more detail below.

Connected versus disconnected


This distinction can arise when a Global Mirror relationship is created with the two virtual disks in different clusters.


Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half the relationship and has only a portion of the information that was available to it before. Some limited configuration activity is possible, and is a subset of what was possible before. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted.

When the clusters can communicate again, the relationships become connected once more. Global Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state it was in when it became disconnected or enter a different connected state.

Relationships that are configured between virtual disks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships or consistency groups that contain relationships can be described as being consistent or inconsistent. The consistent or inconsistent property describes the state of the data on the secondary in relation to that on the primary VDisk. It can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that could have been read by a host system from the primary if power had failed at some imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:
- The secondary VDisk contains the data from all writes to the primary for which the host received good completion and that data was not overwritten by a subsequent write (before the recovery point).
- For writes for which the host did not receive good completion (that is, the host received bad completion or no completion at all), where the host subsequently performed a read from the primary of that data, that read returned good completion, and no later write was sent (before the recovery point), the secondary contains the same data as the data returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure. Again, for correct operation at the secondary, the application is dependent on the key properties of consistency:


- Write ordering
- Read stability

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems, and therefore consistency must operate across all those disks. When deciding how to use consistency groups, the administrator must consider the scope of an application's data, taking into account all the interdependent systems that communicate and exchange information.

If two programs or systems communicate and store details as a result of the information exchanged, one of the following approaches is required:
- All the data accessed by the group of systems must be placed into a single consistency group.
- The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary virtual disks differ only in regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at some point in time in the past. Write I/O might have continued to a primary and not have been copied to the secondary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Global Mirror tracks the changes that happen at the primary, but not the order of such changes, or the details of such changes (write data). When communication is restored, it is impossible to make the secondary synchronized without sending write data to the secondary out-of-order, and therefore losing consistency. Two policies can be used to cope with this:
- Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.
- Accept the loss of consistency, and the loss of a useful secondary, while making it synchronized.


6.11.3 Detailed states


The following sections detail the states that are portrayed to the user, for either consistency groups or relationships. It also details the extra information available in each state. The different major states are constructed to provide guidance as to the configuration commands that are available.

InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either. A copy process needs to be started to make the secondary consistent. This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.

InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group. In this state, a background copy process runs that copies data from the primary to the secondary virtual disk. In the absence of errors, an InconsistentCopying relationship is active, and the Copy Progress increases until the copy process completes. In some error situations, the copy progress might freeze or even regress. A persistent error or Stop command places the relationship or consistency group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state. If the relationship or consistency group becomes disconnected, then the secondary side transits to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary. This state can arise when a relationship was in Consistent Synchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE.


Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this, and the relationship or consistency group transits to InconsistentCopying. Do this only after all outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, then the secondary side transits to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. This can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O. The secondary VDisk is accessible for read-only I/O. Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks. Either good completion must be received for both writes, the write must be failed to the host, or a state must transit out of ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the primary and secondary roles. A start command is accepted, but has no effect. If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling
This is a connected state. Both master and auxiliary disks are operating in the primary role. Consequently, both are accessible for write I/O. In this state, the relationship or consistency group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O. This is indicated by the synchronized status. If the start command leads to loss of consistency, then a -force parameter must be specified. Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is such a loss.


Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, then both sides change their state to IdlingDisconnected.

IdlingDisconnected
This is a disconnected state. The VDisk or VDisks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O. The main priority in this state is to recover the link and make the relationship or consistency group connected once more.

No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes a loss of synchronization (the synchronized attribute transits from TRUE to FALSE) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised to report this condition. This error log is the same as the one raised when the same situation arises in the ConsistentSynchronized state.

InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again.

When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically unless either:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.
In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected. In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to TRUE transits the relationship or consistency group to the IdlingDisconnected state. This allows write I/O to be performed to the secondary VDisk and is used as part of a disaster recovery scenario.

When the relationship or consistency group becomes connected again, it becomes ConsistentSynchronized only if this does not lead to a loss of consistency. This is the case provided that:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show. It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point the state of the relationship becomes the state of the consistency group.

6.11.4 Practical use of Global Mirror


To use Global Mirror, a relationship must be defined between two VDisks. When creating the Global Mirror relationship, one VDisk is defined as the master, and the other as the auxiliary. The relationship between the two copies is asymmetric. When the Global Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as the target).

The master VDisk is the production VDisk, and updates to this copy are mirrored (asynchronously) to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

Note: The copy direction for a Global Mirror relationship can be switched so that the auxiliary VDisk becomes the primary and the master VDisk becomes the secondary.

While the Global Mirror relationship is active, the secondary copy (VDisk) is not accessible for host application write I/O at any time. The SVC allows read-only access to the secondary VDisk when it contains a consistent image. This is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimal delay if required. For example, many operating systems need to read Logical Block Address (LBA) 0 (zero) to configure a logical unit. Although read access is allowed at the secondary, in practice the data on the secondary volumes cannot be read by a host. The reason is that most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads performed at the secondary and later write I/Os performed at the primary.

To enable access to the secondary VDisk for host operations, the Global Mirror relationship must be stopped by specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and perform other related tasks, before the application can be started or instructed to perform a recovery process.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host involved in establishing operation on the secondary copy are substantial. The goal is to make this rapid (much faster than recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions
Table 6-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a VDisk.
Table 6-9 VDisk valid combinations

FlashCopy            Metro Mirror or Global Mirror primary   Metro Mirror or Global Mirror secondary
FlashCopy source     Supported                               Supported
FlashCopy target     Not supported                           Not supported

6.11.6 Global Mirror configuration limits


Table 6-10 lists the Global Mirror configuration limits.
Table 6-10 Global Mirror configuration limits

Parameter                                                       Value
Number of Metro Mirror consistency groups per cluster          256
Number of Metro Mirror relationships per cluster                8192
Number of Metro Mirror relationships per consistency group     8192
Total VDisk size per I/O group                                  1024 TB

There is a per-I/O Group limit of 1024 TB on the quantity of primary and secondary VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.

6.12 Global Mirror commands


Here we summarize some of the most important Global Mirror commands. For complete details about all the Global Mirror commands, see IBM System Storage SAN Volume Controller: Command-Line Interface User's Guide, SC26-7903. The command set for Global Mirror contains two broad groups:

- Commands to create, delete, and manipulate relationships and consistency groups
- Commands that cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Some configuration commands can only be performed when the clusters are connected, and fail with no effect when they are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Global Mirror when the clusters become connected once more.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception, as mentioned previously, is the command that sets clusters into a Global Mirror partnership. The mkpartnership command must be issued on both the local and the remote cluster.

The commands are described here as an abstract command set. These are implemented as:
- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

6.12.1 Listing the available SVC cluster partners


To create an SVC cluster partnership, we use the command svcinfo lsclustercandidate.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the command svcinfo lscluster, specifying the name of the cluster.

svctask chcluster
The svctask chcluster command has three parameters that apply to Global Mirror:
- -gmlinktolerance link_tolerance: Specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM support.
- -gminterdelaysimulation delay_milliseconds: Specifies the number of milliseconds that I/O activity (intercluster copying to a secondary VDisk) is delayed. This permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.


- -gmintradelaysimulation delay_milliseconds: Specifies the number of milliseconds that I/O activity (intracluster copying to a secondary VDisk) is delayed. This permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use svctask chcluster to adjust these values, for example:
svctask chcluster -gmlinktolerance 300
You can view all of the above parameter values with the svcinfo lscluster <clustername> command.
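For example, to simulate 20 milliseconds of intercluster delay during testing, and then to display the current settings, you might use the following commands (the cluster name ITSO_CL1 is a placeholder):

svctask chcluster -gminterdelaysimulation 20
svcinfo lscluster ITSO_CL1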

gmlinktolerance
The gmlinktolerance parameter deserves a particular and detailed note. If the poor response extends past the specified tolerance, a 1920 error is logged and one or more Global Mirror relationships are automatically stopped. This protects the application hosts at the primary site. During normal operation, application hosts see a minimal impact to response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the application hosts' response time returns to normal.

After a 1920 error has occurred, the Global Mirror auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of the error and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when this occurs.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the following circumstances:
- During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror VDisks.
- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you are testing with an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling gmlinktolerance prevents this, at the risk of exposing the test host to extended response times.

We suggest using a script to periodically monitor the Global Mirror status. Example 6-2 shows an example of a ksh script to check the Global Mirror status.
Example 6-2 Script example

[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Stop inconsistent variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable
#
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0
# Start program
if [[ $1 == "" ]]
then
  CICLI="true"
fi
while $CICLI
do
  GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
  echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
  if [[ $GM_STATUS = $PARA_TEST ]]
  then
    sleep 600
  else
    sleep 600
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
      TESTEX=`echo $?`
      echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX " >> $FLOG
    fi
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP ]]
    then
      echo "`date` Global Mirror restarted <$GM_STATUS>"
    else
      echo "`date` ERROR Global Mirror failed <$GM_STATUS>"
    fi
    sleep 600
  fi
  ((VAR+=1))
done

The idea behind the script in Example 6-2 on page 329 is:
- Check the Global Mirror status every 600 seconds.
- If the status is consistent_synchronized, wait another 600 seconds and test again.
- If the status is consistent_stopped or inconsistent_stopped, wait another 600 seconds and then try to restart Global Mirror. If the status is one of these, we probably have a 1920 error scenario, which means we could have a performance problem.
Waiting 600 seconds before restarting Global Mirror could give the SVC enough time to deliver the high workload requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the secondary copy is now out of date by this amount of time and needs to be resynchronized.

Note: The script described in Example 6-2 on page 329 is supplied as-is.

A 1920 error indicates that one or more of the SAN components is unable to provide the performance that is required by the application hosts. This can be temporary (for example, the result of a maintenance activity) or permanent (for example, the result of a hardware failure or an unexpected host I/O workload). If you are experiencing 1920 errors, we suggest that you install a SAN performance analysis tool, such as IBM Tivoli Storage Productivity Center, and make sure that it is correctly configured and monitoring statistics, to look for problems and to try to prevent them.

6.12.2 Creating an SVC cluster partnership


To create an SVC cluster partnership, use the command svctask mkpartnership.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth should be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
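A minimal sketch, assuming a remote cluster named ITSO_CL2 and a 20 MBps background copy limit (both values are placeholders; verify the -bandwidth parameter name against the Command-Line Interface User's Guide), is:

svctask mkpartnership -bandwidth 20 ITSO_CL2

Remember that the equivalent command, naming the local cluster, must also be issued on the remote cluster before the partnership is fully established.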

Background copy bandwidth impact on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy will be attempted for Global Mirror. The background copy bandwidth can affect foreground I/O latency in one of three ways:
- If the background copy bandwidth is set too high compared to the Global Mirror intercluster link capacity, the background copy I/Os can back up on the Global Mirror intercluster link, there is a delay in the synchronous secondary writes of foreground I/Os, and the foreground I/O latency increases as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.
In order to set the background copy bandwidth optimally, make sure that you consider all three resources (the primary storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by
calculation as above or alternatively by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable and then backing off to accommodate peaks in workload and some additional safety margin.

svctask chpartnership
To change the bandwidth available for background copy in an SVC cluster partnership, the command svctask chpartnership can be used to specify the new bandwidth.

6.12.3 Creating a Global Mirror consistency group


To create a Global Mirror consistency group, use the command svctask mkrcconsistgrp.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Global Mirror consistency group. The Global Mirror consistency group name must be unique across all consistency groups known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process. The new consistency group does not contain any relationships and will be in the Empty state. Global Mirror relationships can be added to the group, either upon creation or afterwards, by using the svctask chrcrelationship command.
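As a hedged example, the following command creates an empty consistency group named CG_W2K_GM that spans the local cluster and a remote cluster named ITSO_CL2 (the names are placeholders, and the -cluster parameter for naming the remote cluster is an assumption to verify against the command reference):

svctask mkrcconsistgrp -name CG_W2K_GM -cluster ITSO_CL2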

6.12.4 Creating a Global Mirror relationship


To create a Global Mirror relationship, use the command svctask mkrcrelationship. Note: If you do not use the -global optional parameter, a Metro Mirror relationship will be made instead of a Global Mirror relationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary virtual disk must be equal in size to the master virtual disk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The master and auxiliary VDisk cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, it can be added to an already existing consistency group, or be a stand-alone Global Mirror relationship if no consistency group is specified. To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Global Mirror relationship, use the command svcinfo lsrcrelationshipcandidate, as shown in svcinfo lsrcrelationshipcandidate on page 332.

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list the available VDisks eligible to form a Global Mirror relationship.


When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all VDisks that are not disallowed by some other configuration state, such as being a FlashCopy target, are listed.
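A minimal sketch, listing the candidate VDisks and then creating a Global Mirror relationship in an existing consistency group (all object names are placeholders, and the -consistgrp parameter is an assumption to verify against the command reference), is:

svcinfo lsrcrelationshipcandidate
svctask mkrcrelationship -master GM_MASTER -aux GM_AUX -cluster ITSO_CL2 -global -consistgrp CG_W2K_GM -name GM_REL1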

6.12.5 Changing a Global Mirror relationship


To modify the properties of a Global Mirror relationship, use the command svctask chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group, using the -force flag.

Note: When adding a Global Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
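As a hedged example, the following commands rename a relationship and then add it to a consistency group (the names are placeholders, and the -consistgrp parameter is an assumption to verify against the command reference):

svctask chrcrelationship -name GM_REL2 GM_REL1
svctask chrcrelationship -consistgrp CG_W2K_GM GM_REL2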

6.12.6 Changing a Global Mirror consistency group


To change the name of a Global Mirror consistency group, we use the command svctask chrcconsistgrp.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Global Mirror consistency group.

6.12.7 Starting a Global Mirror relationship


To start a stand-alone Global Mirror relationship, use the command svctask startrcrelationship.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Global Mirror relationship. When issuing the command, the copy direction can be set if undefined, and, optionally, mark the secondary VDisk of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a consistency group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by some I/O error. If the resumption of the copy process leads to a period when the relationship is not consistent, then you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped, and then further writes were performed on the original primary of the relationship. The use of the -force parameter here is a reminder that the data on the secondary will become inconsistent while resynchronization
(background copying) takes place, and therefore is not usable for disaster recovery purposes before the background copy has completed. In the idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the primary argument, but it must match the existing setting.
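For example, to restart a stand-alone relationship from the Idling state, setting the master VDisk as the primary and accepting the temporary loss of consistency (the relationship name is a placeholder):

svctask startrcrelationship -primary master -force GM_REL1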

6.12.8 Stopping a Global Mirror relationship


To stop a stand-alone Global Mirror relationship, use the command svctask stoprcrelationship.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), then the -access parameter can be used with the svctask stoprcrelationship command to enable write access to the secondary virtual disk.
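For example, to stop a stand-alone relationship that is in a consistent state and enable write access to its secondary VDisk (the relationship name is a placeholder):

svctask stoprcrelationship -access GM_REL1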

6.12.9 Starting a Global Mirror consistency group


To start a Global Mirror consistency group, use the command svctask startrcconsistgrp.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Global Mirror consistency group. This command can only be issued to a consistency group that is connected. For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by some I/O error.

6.12.10 Stopping a Global Mirror consistency group


To stop a Global Mirror consistency group, use the command svctask stoprcconsistgrp.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Global Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary VDisks, which belong to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), then the -access parameter can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
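For example, to stop a consistency group that is in a consistent state and enable write access to its secondary VDisks, as part of a disaster recovery exercise (the group name is a placeholder):

svctask stoprcconsistgrp -access CG_W2K_GM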

6.12.11 Deleting a Global Mirror relationship


To delete a Global Mirror relationship, use the command svctask rmrcrelationship.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two virtual disks. It does not affect the virtual disks themselves. If the relationship is disconnected at the time that the command is issued, then the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a consistency group. You must first remove the relationship from the consistency group. If you delete an inconsistent relationship, the secondary virtual disk becomes accessible even though it is still inconsistent. This is the one case in which Global Mirror does not inhibit access to inconsistent data.

6.12.12 Deleting a Global Mirror consistency group


To delete a Global Mirror consistency group, use the command svctask rmrcconsistgrp.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Global Mirror consistency group. This command deletes the specified consistency group. You can issue this command for any existing consistency group. If the consistency group is disconnected at the time that the command is issued, then the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the consistency group is not empty, then the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.


6.12.13 Reversing a Global Mirror relationship


To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of primary and secondary VDisk when a stand-alone relationship is in a consistent state; when issuing the command, the desired primary needs to be specified.
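As an assumed example, making the auxiliary VDisk the new primary for the hypothetical relationship GMREL1 might look like this:
IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL1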

6.12.14 Reversing a Global Mirror consistency group


To reverse a Global Mirror consistency group, use the command svctask switchrcconsistgrp.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of primary and secondary VDisk when a consistency group is in a consistent state. This change is applied to all the relationships in the consistency group, and when issuing the command, the desired primary needs to be specified.
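As an assumed example, making the auxiliary VDisks the new primaries for all relationships in the hypothetical group CG_W2K3_GM might look like this:
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM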


Chapter 7.

SVC operations using the CLI


In this chapter, we show operational management of the SVC. We have divided the chapter into what we describe as normal operations and advanced operations, and we use the CLI to demonstrate them. We appreciate that there is a difference between using the CLI and the GUI, and although there are no rules that say one must be used over the other, we prefer the CLI in this chapter. These are the kind of operations that you may want to script, and working from the CLI also makes the documentation surrounding those scripts easier to create. Our assumption is that this is a fully (and correctly) functioning SVC environment.
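As a small sketch of the kind of scripting we have in mind (the cluster name ITSO-CLS1, the admin user, and the SSH key file are assumptions for this example), a single CLI command can be run non-interactively over SSH and its output captured for later processing:
ssh -i ~/.ssh/svc_key admin@ITSO-CLS1 "svcinfo lsvdisk -delim :" > vdisk_report.txt
The same pattern can be repeated inside a shell script to collect configuration reports or to issue a series of svctask commands.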


7.1 Normal operations using CLI


In the topics that follow we describe those commands that we feel represent normal operational commands.

7.1.1 Command syntax and online help


Two major command sets are available:
The svcinfo command set allows us to query the various components within the IBM System Storage SAN Volume Controller (SVC) environment.
The svctask command set allows us to make changes to the various components within the SVC.
When the command syntax is shown, you will see some parameters in square brackets, for example, [parameter]. This indicates that the parameter is optional in most, if not all, instances. Anything that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
svcinfo -?: Shows a complete list of information commands.
svctask -?: Shows a complete list of task commands.
svcinfo commandname -?: Shows the syntax of information commands.
svctask commandname -?: Shows the syntax of task commands.
svcinfo commandname -filtervalue?: Shows what filters you can use to reduce output of the information commands.
Note: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h.
If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue, as stated above.
Tip: You can use the up and down keys on your keyboard to recall commands recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.
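For example, to see which filters the svcinfo lsvdisk command accepts and then apply one of them (the MDG name MDG_DS45 is only an assumed example), you might enter:
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue?
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=MDG_DS45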

7.2 Working with managed disks and disk controller systems


This section details the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment, and the tasks that you can perform at a disk controller level.

7.2.1 Viewing disk controller details


Use the svcinfo lscontroller command to display summary information about all available back-end storage systems. To display more detailed information about a specific controller, run the command again and append the controller name parameter, for example, controller id 0 as shown in Example 7-1.
Example 7-1 svcinfo lscontroller command

IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0


id 0 controller_name ITSO_XIV_01 WWNN 50017380022C0000 mdisk_link_count 10 max_mdisk_link_count 10 degraded no vendor_id IBM product_id_low 2810XIV product_id_high LUN-0 product_revision 10.1 ctrl_s/n allow_quorum yes WWPN 50017380022C0170 path_count 2 max_path_count 4 WWPN 50017380022C0180 path_count 2 max_path_count 2 WWPN 50017380022C0190 path_count 4 max_path_count 6 WWPN 50017380022C0182 path_count 4 max_path_count 12 WWPN 50017380022C0192 path_count 4 max_path_count 6 WWPN 50017380022C0172 path_count 4 max_path_count 6

7.2.2 Renaming a controller


Use the svctask chcontroller command to change the name of a storage controller. To verify the change, run the svcinfo lscontroller command. Both of these commands are shown in Example 7-2.
Example 7-2 svctask chcontroller command

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0 IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim , id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high 0,DS4500,,IBM ,1742-900, 1,DS4700,,IBM ,1814 , FAStT This command renames the controller named controller0 to DS4500. Note: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word controller, since this prefix is reserved for SVC assignment only.


7.2.3 Discovery status


Use the svcinfo lsdiscoverystatus command, as shown in Example 7-3, to determine if a discovery operation is in progress or not. The output of this command is the status of active or inactive.
Example 7-3 lsdiscoverystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lsdiscoverystatus status inactive

7.2.4 Discovering MDisks


In general, the cluster detects the MDisks automatically when they appear in the network. However, some Fibre Channel controllers do not send the required SCSI primitives that are necessary to automatically discover the new MDisks. If new storage has been attached and the cluster has not detected it, it might be necessary to run this command before the cluster will detect the new MDisks. Use the svctask detectmdisk command to scan for newly added MDisks (Example 7-4).
Example 7-4 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk subsystem, and that the zones are properly set up. Note: If you have assigned a large number of LUNs to your SVC, the discovery process can take a while. Run the svcinfo lsmdisk command several times and check whether all of the MDisks that you were expecting are present. When all of the disks allocated to the SVC are visible from the SVC cluster, a good way to verify which MDisks are unmanaged and ready to be added to an MDG follows. Perform the following steps to display managed disks (MDisks): 1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 7-5. This displays all detected MDisks that are not currently part of a managed disk group (MDG).
Example 7-5 svcinfo lsmdiskcandidate command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate id 0 1 2 . .

Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 7-6.


Example 7-6 svcinfo lsmdisk command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID 0,mdisk0,online,unmanaged,,,36.0GB,0000000000000000,controller0,600a0b8000174431000000eb 47139cca00000000000000000000000000000000 1,mdisk1,online,unmanaged,,,36.0GB,0000000000000001,controller0,600a0b8000174431000000ef 47139e1c00000000000000000000000000000000 2,mdisk2,online,unmanaged,,,36.0GB,0000000000000002,controller0,600a0b8000174431000000f1 47139e7200000000000000000000000000000000 3,mdisk3,online,unmanaged,,,36.0GB,0000000000000003,controller0,600a0b8000174431000000e4 4713575400000000000000000000000000000000 4,mdisk4,online,unmanaged,,,36.0GB,0000000000000004,controller0,600a0b8000174431000000e6 4713576000000000000000000000000000000000 5,mdisk5,online,unmanaged,,,36.0GB,0000000000000000,controller1,600a0b800026b28200003ea3 4851577c00000000000000000000000000000000 6,mdisk6,online,unmanaged,,,36.0GB,0000000000000005,controller0,600a0b8000174431000000e7 47139cb600000000000000000000000000000000 7,mdisk7,online,unmanaged,,,36.0GB,0000000000000001,controller1,600a0b80002904de00004188 485157a400000000000000000000000000000000 8,mdisk8,online,unmanaged,,,36.0GB,0000000000000006,controller0,600a0b8000174431000000ea 47139cc400000000000000000000000000000000

From this output, you can see additional information about each MDisk (such as current status). For the purpose of our current task, we are only interested in the unmanaged disks because they are candidates for MDGs (all MDisks in our case). Tip: The -delim, parameter collapses output instead of wrapping text over multiple lines. 2. If not all the MDisks that you expected are visible, rescan the available Fibre Channel network by entering the svctask detectmdisk command, as in Example 7-7.
Example 7-7 svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 63 for details about how to set up your SAN fabric.

7.2.5 Viewing MDisk information


When viewing information about the MDisks (managed or unmanaged), we can use the svcinfo lsmdisk command to display overall summary information about all available managed disks. To display more detailed information about a specific MDisk, run the command again and append the MDisk name or ID (for example, mdisk0). The overview command is svcinfo lsmdisk -delim , as shown in Example 7-8. The detailed view for an individual MDisk is svcinfo lsmdisk (name or ID of the MDisk you want the information about), as shown in Example 7-9.
Example 7-8 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,


id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004 86a6600000ae94a89575900000000000000000000000000000000 1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000 000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004 858a000000e154a895db000000000000000000000000000000000 Example 7-9 shows a summary for a single MDisk.
Example 7-9 Usage of the command svcinfo lsmdisk (id).

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 2 id 2 name mdisk2 status online mode managed mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 16.0GB quorum_index 0 block_size 512 controller_name controller0 ctrl_type 4 ctrl_WWNN 200600A0B84858A0 controller_id 0 path_count 2 max_path_count 2 ctrl_LUN_# 0000000000000002 UID 600a0b80004858a000000e144a895d9400000000000000000000000000000000 preferred_WWPN 200600A0B84858A2 active_WWPN 200600A0B84858A2

7.2.6 Renaming an MDisk


Use the svctask chmdisk command to change the name of an MDisk. When using the command, be aware that the new name comes first, followed by the ID or name of the MDisk being renamed; that is, svctask chmdisk -name (new name) (current id/name). To verify the change, run the svcinfo lsmdisk command. Both of these commands are shown in Example 7-10.
Example 7-10 svctask chmdisk command

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 6,mdisk_6,online,managed,0,MDG_DS45,36.0GB,0000000000000005,DS4500,600a0b800017443 1000000e747139cb600000000000000000000000000000000


This command renamed the MDisk named mdisk6 to mdisk_6. Note: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word mdisk, since this prefix is reserved for SVC assignment only.

7.2.7 Including an MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can be from a hardware problem, a storage area network (SAN) problem, or the result of poorly planned maintenance. If it was a hardware fault, you should receive Simple Network Management Protocol (SNMP) alerts about the state of the disk subsystem (before the disk was excluded) and undertake preventative maintenance. If not, the hosts that were using VDisks, which used the excluded MDisk, now have I/O errors. By running the svcinfo lsmdisk command, you can see that mdisk9 is excluded in Example 7-11.
Example 7-11 svcinfo lsmdisk command: Excluded MDisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431 000000ea47139cc400000000000000000000000000000000 9,mdisk9,excluded,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b2 8200003ed6485157b600000000000000000000000000000000 After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the svctask includemdisk command (Example 7-12) since the SVC cluster does not include the MDisk automatically.
Example 7-12 svctask includemdisk

IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk9 Running the svcinfo lsmdisk command again should show mdisk9 online again, as shown in Example 7-13.
Example 7-13 svcinfo lsmdisk command: Verifying that MDisk is included

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431 000000ea47139cc400000000000000000000000000000000 9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b282 00003ed6485157b600000000000000000000000000000000

7.2.8 Adding MDisks to an MDisk group


If you created an empty MDG or you simply assign additional MDisks to your already configured MDG, you can use the svctask addmdisk command to populate the MDG (Example 7-14).

Example 7-14 svctask addmdisk command

IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45 You can only add unmanaged MDisks to an MDG. This command adds MDisk mdisk6 to the MDG named MDG_DS45. Important: Do not do this if you want to create an image mode VDisk from the MDisk you are adding. As soon as you add an MDisk to an MDG, it becomes managed, and extent mapping is not necessarily one to one anymore.

7.2.9 Showing the MDisk group


Use the svcinfo lsmdisk command as before to display information about the managed disk group (MDG) to which an MDisk belongs, as shown in Example 7-15.
Example 7-15 svcinfo lsmdisk command

id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,4,468.0GB,512,355.0GB,140.00GB,100.00GB,112.00GB,29,0 1,MDG_DS47,online,8,3,288.0GB,512,217.5GB,120.00GB,20.00GB,70.00GB,41,0

7.2.10 Showing MDisks in an MDisk group


Use the svcinfo lsmdisk -filtervalue command, as shown in Example 7-16, to see which MDisks are part of a specific MDG. This command shows all MDisks that are part of the MDG MDG2.
Example 7-16 svcinfo lsmdisk -filtervalue: mdisks in MDG

IBM_2145:ITSOSVC42A:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG2 -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID 6:mdisk6:online:managed:2:MDG2:3.0GB:0000000000000006:DS4000:600a0b800017423300000 044465c0a2700000000000000000000000000000000 7:mdisk7:online:managed:2:MDG2:6.0GB:0000000000000007:DS4000:600a0b800017443100000 06f465bf93200000000000000000000000000000000 21:mdisk21:online:image:2:MDG2:2.0GB:0000000000000015:DS4000:600a0b800017443100000 0874664018600000000000000000000000000000000

7.2.11 Working with managed disk groups (MDG)


Before we can create any volumes on the SVC cluster, we need to virtualize the storage that has been allocated to the SVC. When volumes have been assigned to the SVC as so-called managed disks, we cannot start using them until they are members of an MDisk group. Therefore, one of the first operational tasks is to create an MDisk group in which we can place our MDisks. This section only covers the operational side of working with MDisks and MDisk groups, and it explains the tasks that we can perform at an MDG level.


7.2.12 Creating MDisk group


After successfully logging on to the CLI interface of the SVC, create the MDisk group with the svctask mkmdiskgrp command, as shown in Example 7-17.
Example 7-17 svctask mkmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512
MDisk Group, id [0], successfully created
This command creates an MDG called MDG_DS47. The extent size used within this group is 512 MB, which is the most commonly used extent size. We have not added any MDisks to the MDisk group yet, so it is an empty MDG. It is also possible to add unmanaged MDisks and create the MDG with the same command: using svctask mkmdiskgrp with the -mdisk parameter and the IDs or names of the MDisks adds those MDisks immediately after the MDG is created. So, prior to creating the MDG, you enter the svcinfo lsmdisk command shown in Example 7-18, where we list all available MDisks seen by the SVC cluster.
Example 7-18 Listing available MDisks.

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,unmanaged,,,16.0GB,0000000000000000,controller0,600a0b8000486a6600 000ae94a89575900000000000000000000000000000000 1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000 000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004 858a000000e154a895db000000000000000000000000000000000 Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk ids we are using, we can add multiple MDisks to the MDG at the same time. We will now add the unmanaged MDisks shown in Example 7-18 on page 345 to the MDG we created, shown in Example 7-19.
Example 7-19 Creating an MDiskgrp and adding available MDisks.

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1 MDisk Group, id [0], successfully created This command creates an MDG called MDG_DS47. The extent size used within this group is 512 MB, and two MDisks (0 and 1) are added to the group.


Note: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpX, where X is the ID sequence number assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty MDG is created. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. It can be between one and 15 characters in length, but it cannot start with a number or the word mDiskgrp because this prefix is reserved for SVC assignment only. By running the svcinfo lsmdisk command, you should now see the MDisks as managed and part of the MDG_DS47, as shown in Example 7-20.
Example 7-20 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b80004 86a6600000ae94a89575900000000000000000000000000000000 1,mdisk1,online,managed,0,MDG_DS47,16.0GB,0000000000000001,controller0,600a0b80004 858a000000e134a895d6e00000000000000000000000000000000 2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004 858a000000e144a895d9400000000000000000000000000000000 3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004 858a000000e154a895db000000000000000000000000000000000 You have now completed the tasks required to create an MDG.

7.2.13 Viewing MDisk group information


Use the svcinfo lsmdiskgrp command, as shown in Example 7-21, to display information about the MDGs defined in the SVC.
Example 7-21 svcinfo lsmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim , id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0 1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0

7.2.14 Renaming an MDisk group


Use the svctask chmdiskgrp command to change the name of an MDG. To verify the change, run the svcinfo lsmdiskgrp command. Both of these commands are shown in Example 7-22.
Example 7-22 svctask chmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svctask chmdiskgrp -name MDG_DS81 MDG_DS83 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim , id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0 1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0


2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85 This command renamed the MDG from MDG_DS83 to MDG_DS81. Note: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word mdiskgrp, since this prefix is reserved for SVC assignment only.

7.2.15 Deleting an MDisk group


Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster configuration (Example 7-23).
Example 7-23 svctask rmmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_DS81 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim , id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_ capacity,used_capacity,real_capacity,overallocation,warning 0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0 1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0 This command removes MDG_DS81 from the SVC configuration. Note: If there are MDisks within the MDG, you must use the -force flag, for example: svctask rmmdiskgrp MDG_DS81 -force Ensure that you really want to use this flag, as it destroys all mapping information and data held on the VDisks, which cannot be recovered.

7.2.16 Removing MDisks from an MDisk group


Use the svctask rmmdisk command to remove an MDisk from an MDG (Example 7-24).
Example 7-24 svctask rmmdisk command

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 6 -force MDG_DS45
This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The -force flag is set because there are VDisks using this MDG. Note: The removal only takes place if there is sufficient space to migrate the VDisk data to other extents on other MDisks that remain in the MDG. After you remove the MDisk from the MDG, it can take some time for it to change its mode from managed back to unmanaged.
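While the data from the removed MDisk is being migrated to the remaining MDisks, you can check the progress of the migration with the svcinfo lsmigrate command (shown here as a general sketch; the output depends on how much data remains to be moved):
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate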

7.3 Working with hosts


This section explains the tasks that can be performed at a host level.


When we are creating a host in our SVC cluster we need to define the connection method. Starting with SVC 5.1 we can now define our host as iSCSI attached or Fibre Channel attached, and these connection methods are described in detail in Chapter 2, IBM System Storage SAN Volume Controller overview on page 7.

7.3.1 Creating a Fibre Channel attached host


We show how to create a host under different circumstances in the sections that follow.

Host is powered on and is connected and zoned to the SVC


When you create your host on the SVC, it is good practice to check that the host bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing that, you ensure that zoning is done and that the correct WWPNs will be used. To do this, issue the svcinfo lshbaportcandidate command, as shown in Example 7-25.
Example 7-25 svcinfo lshbaportcandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B89C1CD 210000E08B054CAA After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the svctask mkhost command to create your host. Note: If you do not provide the -name parameter, the SVC automatically generates the name hostX (where X is the ID sequence number assigned by the SVC internally). You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word host, since this prefix is reserved for SVC assignment only. The command to create a host is shown in Example 7-26.
Example 7-26 svctask mkhost

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA Host, id [0], successfully created This command creates a host called Palau using WWPN 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA Note: You can define from one up to eight ports per host or you can use the addport command, which we show in 7.3.5, Adding ports to a defined host on page 352.

Host is not powered on or not connected to the SAN


If you want to create a host on the SVC without the target WWPN being visible through the svcinfo lshbaportcandidate command, you need to add the -force flag to your mkhost command, as shown in Example 7-27. This option is more prone to human error than choosing the WWPN from a list, but it is typically used when many host definitions are created at the same time, such as through a script.


In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the host regardless of whether they are connected, as shown in Example 7-27.
Example 7-27 mkhost -force

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Guinea -hbawwpn 210000E08B89C1DC -force Host, id [4], successfully created This command forces the creation of a host called Guinea using WWPN 210000E08B89C1DC. Note: WWPNs are not case sensitive in the CLI. If you run the svcinfo lshost command again, you should now see your host Guinea under host id 4.

7.3.2 Creating an iSCSI attached host


We now have the opportunity to create a host definition for a host that is not connected to the SAN but has LAN access to our SVC nodes. Before we create the host definition, we need to configure our SVC cluster to use the new iSCSI connection method. Further information about how to configure your nodes to use iSCSI is provided in 7.7.4, iSCSI configuration on page 380. The iSCSI functionality allows the host to access volumes through the SVC without being attached to the SAN. Back-end storage and node-to-node communication still require the Fibre Channel network, but the host does not necessarily need to be connected to the SAN. When we create a host that is going to use iSCSI as its communication method, iSCSI initiator software must be installed on the host to initiate the communication between the SVC and the host. The initiator provides the IQN identifier that is needed before we create our host. Before we start, we check our server's IQN address. We are running Windows Server 2008; under Start --> Programs --> Administrative tools, we select iSCSI Initiator, and, as shown in Figure 7-1 on page 350, our IQN in this example is iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com.


Figure 7-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 7-28, and when the command completes successfully, we display our newly created host. It is important to know that when the host is initially configured, the default authentication method is set to no authentication and no CHAP secret is set. To set a CHAP secret for authenticating the iSCSI host with the SAN Volume Controller cluster, use the svctask chhost command with the -chapsecret parameter.
Example 7-28 mkhost command

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com Host, id [4], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 0 state offline We have now created our host definition and we will now map a VDisk to our new iSCSI server as shown in Example 7-29. We have already created the VDisk as shown in 7.4.1, Creating a VDisk on page 354. In our scenario our VDisk has id 21 and the hostname is Baldur and now we map it to our iSCSI host.
Example 7-29 Mapping VDisk to iSCSI host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21


Virtual Disk to Host map, id [0], successfully created After the VDisk has been mapped to the host we display the host information again as shown in Example 7-30.
Example 7-30 svcinfo lshost

IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 1 state online Note: Fibre Channel and iSCSI hosts are handled in the same operational way once they have been created. If you need to display a CHAP secret for an already defined server, you can do that with the svcinfo lsiscsiauth command.
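As an assumed example (the CHAP secret passw0rd is purely illustrative), setting a CHAP secret for the host Baldur and then listing the iSCSI authentication information might look like this:
IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret passw0rd Baldur
IBM_2145:ITSO-CLS1:admin>svcinfo lsiscsiauth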

7.3.3 Modifying a host


Use the svctask chhost command to change the name of a host. To verify the change, run the svcinfo lshost command. Both of these commands are shown in Example 7-31 on page 351.
Example 7-31 svctask chhost command

IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name   port_count iogrp_count
0  Palau  2          4
1  Nile   2          1
2  Kanaga 2          1
3  Siam   2          2
4  Angola 1          4

This command renamed the host from Guinea to Angola. Note: The chhost command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word host, since this prefix is reserved for SVC assignment only.

Note: If you are using HP-UX, there is a -type option to use. See the IBM System Storage SAN Volume Controller Host Attachment Guide for more information about the hosts that require the -type parameter.


7.3.4 Deleting a host


Use the svctask rmhost command to delete a host from the SVC configuration. If your host is still mapped to VDisks and you use the -force flag the host will be deleted and all mappings along with it. The VDisks will not be deleted, only the mappings to them. The command shown in Example 7-32 deletes the host called Angola from the SVC configuration.
Example 7-32 rmhost Angola

IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola Note: If there are any VDisks assigned to the host, you must use the -force flag, for example: svctask rmhost -force Angola

7.3.5 Adding ports to a defined host


If you add an HBA or a NIC to a server that is already defined within the SVC, you can use the svctask addhostport command to add the new port definitions to your host configuration. If your host is currently connected through SAN with Fibre Channel and the WWPN is already zoned to the SVC cluster, you issue the svcinfo lshbaportcandidate command, as shown in Example 7-33 to compare with the information you have from the server administrator.
Example 7-33 svcinfo lshbaportcandidate

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B054CAA If the WWPN matches your information (use host or SAN switch utilities to verify), use the svctask addhostport command to add the port to the host. The command to add a host port is shown in Example 7-34.
Example 7-34 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau This command adds the WWPN of 210000E08B054CAA to the host Palau. Note: You can add multiple ports all at once by using the separator (:) between WWPNs, for example: svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau If the new HBA is not connected or zoned the svcinfo lshbaportcandidate command will not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to create the host regardless, as shown in Example 7-35.
Example 7-35 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force Palau


This command forces the addition of the WWPN 210000E08B054CAA to the host called Palau. Note: WWPNs are one of the few things within the CLI that are not case sensitive. If you run the svcinfo lshost command again, you should see your host with an updated port count 2 in Example 7-36.
Example 7-36 svcinfo lshost command: port count

IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name       port_count iogrp_count
0  Palau      2          4
1  ITSO_W2008 1          4
2  Thor       3          1
3  Frigg      1          1
4  Baldur     1          1

If your host is currently using iSCSI as its connection method, you need to have the new iSCSI IQN before you add the port. It is not possible to check for available candidates as you can with Fibre Channel attached hosts. When you have acquired the additional iSCSI IQN, use the svctask addhostport command, as shown in Example 7-37.
Example 7-37 Adding an iSCSI port to an already configured host

IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

7.3.6 Deleting ports


If you make a mistake when adding, or if you remove an HBA from a server that is already defined within the SVC, you can use the svctask rmhostport command to remove WWPN definitions from an existing host. Before you remove the WWPN, be sure that it is the right one. To find this out, issue the svcinfo lshost command, as shown in Example 7-38.
Example 7-38 svcinfo lshost command

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111 iogrp_count 4 WWPN 210000E08B054CAA node_logged_in_count 2 state active WWPN 210000E08B89C1CD node_logged_in_count 2 state offline


When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a host port as shown in Example 7-39.
Example 7-39 svctask rmhostport

For removing WWPN IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau and for removing iSCSI IQN IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur This command removes the WWPN of 210000E08B89C1CD from host Palau and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the host Baldur. Note: You can remove multiple ports at a time by using the separator (:) between the port names, for example: svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

7.4 Working with VDisks


This section details the various configuration and administration tasks that can be performed on the virtual disks (VDisks) within the SVC environment.

7.4.1 Creating a VDisk


The mkvdisk command creates sequential, striped, or image mode virtual disk objects. When they are mapped to a host object, these objects are seen as disk drives with which the host can perform I/O operations. When creating a virtual disk (VDisk), you must enter several parameters (some mandatory, some optional) at the CLI. The full command string and information can be found in the Command-Line Interface Users Guide, SC26-7903-05. Note: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used. Before you start creating a VDisk, there are certain things that you need to know:
In which MDG the VDisk is going to have its extents
From which I/O group the VDisk will be accessed
The size of the VDisk
The name of the VDisk
When you are ready to create your striped VDisk, you use the svctask mkvdisk command (we cover sequential and image mode VDisks later). In Example 7-40, this command creates a 10 GB striped VDisk with VDisk ID 0 within the MDG MDG_DS47 and assigns it to the I/O group io_grp0.


Example 7-40 svctask mkvdisk commands IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit gb -name Tiger Virtual Disk, id [0], successfully created

To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41 on page 355.
Example 7-41 svcinfo lsvdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 0 id 0 name Tiger IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801AB813F1000000000000000 throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00MB real_capacity 10.00MB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize


You have now completed the tasks required to create a VDisk.

7.4.2 VDisk information


Use the svcinfo lsvdisk command to display summary information about all VDisks defined within the SVC environment. To display more detailed information about a specific VDisk, run the command again and append the VDisk name parameter (for example, VDisk_D). Both of these commands are shown in Example 7-42.
Example 7-42 svcinfo lsvdisk command IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim , id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC _name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count 0,vdisk_A,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,,,60050768018301BF2800000000000008,0 ,1 1,vdisk_B,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001, 0,1 2,vdisk_C,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0 ,1 3,vdisk_D,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0 ,1 4,MM_DBLog_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,4,MMREL2,60050768018301BF280000 0000000004,0,1 5,MM_DB_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,5,MMREL1,60050768018301BF280000000 0000005,0,1 6,MM_App_Pri,1,io_grp1,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF280000000000000 6,0,1

7.4.3 Creating a Space efficient VDisk


In Example 7-43, we show an example of how to create a space-efficient VDisk (SEV). It is important to know that, in addition to the normal parameters, the following parameters need to be used:
-rsize: Makes the VDisk space-efficient; otherwise, the VDisk is fully allocated.
-autoexpand: Specifies that space-efficient copies automatically expand their real capacities by allocating new extents from their managed disk group.
-grainsize: Sets the grain size (KB) for a space-efficient VDisk.
Example 7-43 Usage of the command svctask mkvdisk

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 1 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [7], successfully created
This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named MDG_DS45 and is owned by the I/O group io_grp1. The real_capacity will automatically expand until the VDisk size of 10 GB is reached. The grain size is set to 32 KB, which is the default.


Note: When using the -rsize parameter you have the following options; disk_size, disk_size_percentage and auto. Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent character (%). Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the VDisk. The auto option creates a VDisk copy that uses the entire size of the MDisk; if you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.

7.4.4 Creating a VDisk in image mode


This virtualization type allows image mode virtual disks to be created when a managed disk already has data on it, perhaps from a pre-virtualized subsystem. When an image mode virtual disk is created, it directly corresponds to the previously unmanaged, managed disk that it was created from. Therefore, with the exception of space-efficient image mode VDisks, virtual disk logical block address (LBA) x equals managed disk LBA x. You can use this command to bring a non-virtualized disk under the control of the cluster. After it is under the control of the cluster, you can migrate the virtual disk from the single managed disk. When it is migrated, the virtual disk is no longer an image mode virtual disk. You can add image mode VDisks to an already populated MDisk group with other types of VDisks, such as a striped or sequential. Note: An image mode VDisk must be at least 512 bytes (the capacity cannot be 0). That is, the minimum size that can be specified for an image mode VDisk must be the same as the MDisk group extent size that it is added to, with a minimum of 16 MB. You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode VDisk. Note: If you create a mirrored VDisk from two image mode MDisks without specifying a -capacity value, the capacity of the resulting VDisk is the smaller of the two MDisks, and the remaining space on the larger MDisk is not accessible. If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used. Use the svctask mkvdisk command to create an image mode VDisk as shown in Example 7-44.
Example 7-44 svctask mkvdisk (image mode)

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk mdisk20 -vtype image -name Image_Vdisk_A Virtual Disk, id [8], successfully created This command creates an image mode VDisk called Image_Vdisk_A using MDisk mdisk20. The VDisk belongs to the MDG MDG_Image and is owned by the I/O group io_grp0.


If we run the svcinfo lsmdisk command again, notice that mdisk20 now has a status of image, as shown in Example 7-45.
Example 7-45 svcinfo lsmdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b2 8200003f9f4851588700000000000000000000000000000000 20,mdisk20,online,image,2,MDG_Image,36.0GB,0000000000000007,DS4700,600a0b80002904d e00004282485158aa00000000000000000000000000000000

7.4.5 Adding a mirrored VDisk copy


You can create a mirrored copy of a VDisk. This keeps a VDisk accessible even when the MDisk on which it depends has become unavailable. You can create a copy of a VDisk either on different MDisk groups or by creating an image mode copy of the VDisk. Copies increase the availability of data; however, they are not separate objects. You can only create or change mirrored copies from the VDisk. In addition, you can use VDisk mirroring as an alternative method of migrating VDisks between MDisk groups. For example, if you have a non-mirrored VDisk in one MDisk group and want to migrate that VDisk to another MDisk group, you can add a new copy of the VDisk and specify the second MDisk group. After the copies are synchronized, you can delete the copy on the first MDisk group. The VDisk is migrated to the second MDisk group while remaining online during the migration. To create a mirrored copy of a VDisk, use the addvdiskcopy command. This adds a copy of the chosen VDisk to the MDG selected, which changes a non-mirrored VDisk into a mirrored VDisk. In the following scenario, we show how to create a VDisk copy mirror from one MDG to another MDG. As you can see in Example 7-46, the VDisk has a copy with copy_id 0.
Example 7-46 lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C id 2 name vdisk_C IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 1 mdisk_grp_name MDG_DS47 capacity 45.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name


RC_id RC_name vdisk_UID 60050768018301BF2800000000000002 virtual_disk_throttling (MB) 20 preferred_node_id 3 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 12.00GB free_capacity 12.00GB overallocation 375 autoexpand off warning 23 grainsize 32 In Example 7-47, the VDisk copy mirror is being added using the svctask addvdiskcopy command.
Example 7-47 svctask addvdiskcopy

IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -vtype striped -rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C Vdisk [2] copy [1] successfully created During the synchronization process, the status can be seen using the command svcinfo lsvdisksyncprogress. As shown in Example 7-48, the first time the status has been checked the synchronization progress was at 86%, and the estimated completion time was 19:16:54. The second time the command is performed, the progress status is at 100%, and the synchronization has completed.
Example 7-48 Synchronization

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        vdisk_C    1       86       080710191654
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2        vdisk_C    1       100

As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and can be seen using the svcinfo lsvdisk command.
Example 7-49 svcinfo lsvidsk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C id 2 name vdisk_C IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id many mdisk_grp_name many capacity 45.0GB type many formatted no mdisk_id many mdisk_name many FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF2800000000000002 virtual_disk_throttling (MB) 20 preferred_node_id 3 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 2 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 12.00GB free_capacity 12.00GB overallocation 375 autoexpand off warning 23 grainsize 32 copy_id 1 status online sync yes primary no
mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.44MB real_capacity 20.02GB free_capacity 20.02GB overallocation 224 autoexpand on warning 80 grainsize 64 Notice that the VDisk copy mirror (copy_id 1) does not have the same values as the VDisk copy. While adding a VDisk copy mirror, you are able to define a mirror with different parameters than the VDisk copy. This means that you can define a space-efficient VDisk copy mirror for a non-space-efficient VDisk copy and vice-versa. This is one of the ways to migrate a non-space-efficient VDisk to a space-efficient VDisk. Note: To change the parameters of a VDisk copy mirror, it must be deleted and redefined with the new values.

7.4.6 Splitting a VDisk Copy


The splitvdiskcopy command creates a new VDisk in the specified I/O Group from a copy of the specified VDisk. If the copy that you are splitting is not synchronized, you must use the -force parameter. The command fails if you are attempting to remove the only synchronized copy. To avoid this, wait for the copy to synchronize or split the un-synchronized copy from the VDisk by using the -force parameter. You can run the command when either VDisk copy is offline. Example 7-50 shows the command svctask splitvdiskcopy, which is used to split a VDisk copy. It creates a new vdisk_N from the copy of vdisk_B.
Example 7-50 Split VDisk

IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name vdisk_N vdisk_B Virtual Disk, id [2], successfully created As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an independent VDisk.
Example 7-51 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_N id 2 name vdisk_N IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 100.0GB


type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF280000000000002F throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 84.75MB real_capacity 20.10GB free_capacity 20.01GB overallocation 497 autoexpand on warning 80 grainsize 64 The VDisk copy of VDisk vdisk_B has now lost its mirror. Therefore, a new VDisk has been created.

7.4.7 Modifying a VDisk


Executing the svctask chvdisk command will modify a single property of a Virtual Disk. Only one property can be modified at a time. So, to change the name and modify the I/O Group would require two invocations of the command. A new name, or label, can be specified. The new name can be used subsequently to reference the Virtual Disk. The I/O Group with which this Virtual Disk is associated can be changed. Note that this requires a flush of the cache within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host level before performing this operation.
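As a sketch (the new name vdisk_D_new is an assumption for this example), renaming a VDisk and then moving it to another I/O group therefore requires two separate invocations:
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -name vdisk_D_new vdisk_D
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -iogrp io_grp0 vdisk_D_new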


Note: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O group that does not include any of those hosts. This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk in the target IO group. If the -force parameter is used and the cluster is unable to destage all write data from the cache, the contents of the VDisk are corrupted by the loss of the cached data. If the -force parameter is used to move a VDisk that has out-of-sync copies, a full re-synchronization is required.

7.4.8 I/O governing


You can set a limit on the amount of I/O transactions that is accepted for a virtual disk. It is set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set when a virtual disk is created. The choice between I/O and MB as the I/O governing throttle should be based on the disk access profile of the application. Database applications generally issue large amounts of I/O, but only transfer a relatively small amount of data. In this case, setting an I/O governing throttle based on MBs per second does not achieve much. It is better to use an I/Os per second throttle. At the other extreme, a streaming video application generally issues a small amount of I/O, but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much, so it is better to use an MB per second throttle. Note: An I/O governing rate of 0 (displayed as throttling in the CLI output of the svcinfo lsvdisk command) does not mean that zero I/O per second (or MBs per second) can be achieved. It means that no throttle is set. An example of the chvdisk command is shown in Example 7-52.
Example 7-52 svctask chvdisk (rate/warning SEV)

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk_C IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk7 Note: The chvdisk command specifies the new name first. The name can consist of letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word vdisk, because this prefix is reserved for SVC assignment only. The first command changes the VDisk throttling of vdisk7 to 20 MBps, while the second command changes the SEV warning to 85%. If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in Example 7-53.
Example 7-53 svcinfo lsvdisk command: verifying throttling

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk7


id 7 name vdisk7 IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 10.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF280000000000000A virtual_disk_throttling (MB) 20 preferred_node_id 6 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 5.02GB free_capacity 5.02GB overallocation 199 autoexpand on warning 85 grainsize 32
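If an application is better governed by I/Os per second than by MBs per second, the throttle could instead be set without the -unitmb flag, in which case the rate is interpreted as I/Os per second. This is a sketch only; the VDisk name and rate value are illustrative:
svctask chvdisk -rate 4000 vdisk_C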

7.4.9 Deleting a VDisk


When executing this command on an existing managed mode VDisk, any data that remained on it will be lost. The extents that made up this VDisk will be returned to the pool of free extents available in the Managed Disk Group. If any Remote Copy, FlashCopy, or host mappings still exist for this virtual disk, then the delete will fail unless the -force flag is specified. This flag ensures the deletion of the VDisk and any VDisk to host mappings and/or copy mappings.

If the VDisk is currently the subject of a migrate to image mode, then the delete will fail unless the -force flag is specified. This flag will halt the migration and then delete the VDisk. If the command succeeds (without the -force flag) for an image mode disk, then the underlying back-end controller logical unit will be consistent with the data that a host could previously have read from the Image Mode Virtual Disk; that is, all fast write data will have been flushed to the underlying LUN. If the -force flag is used, then this guarantee does not hold. If there is any un-destaged data in the fast write cache for this VDisk, then the deletion of the VDisk will fail unless the -force flag is specified, in which case any un-destaged data in the fast write cache is discarded. Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown in Example 7-54.
Example 7-54 svctask rmvdisk

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk vdisk_A This command deletes VDisk vdisk_A from the SVC configuration. If the VDisk is assigned to a host, you need to use the -force flag to delete the VDisk (Example 7-55).
Example 7-55 svctask rmvdisk (force)

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk -force vdisk_A

7.4.10 Expanding a VDisk


Expanding a VDisk presents a larger capacity disk to your operating system. Although this can be easily done using the SVC, you must ensure your operating system(s) supports expansion before using this function. Assuming your operating system(s) supports it, you can use the svctask expandvdisksize command to increase the capacity of a given VDisk. A sample of this command is shown in Example 7-56.
Example 7-56 svctask expandvdisksize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C This command expands vdisk_C, which was previously 35 GB, by another 5 GB to give a total of 40 GB. To expand the real capacity of a space-efficient VDisk, you can use the -rsize option, as shown in Example 7-57. That command increases the real capacity of VDisk vdisk_B to 55 GB; the virtual capacity of the VDisk remains unchanged.
Example 7-57 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B id 1 name vdisk_B capacity 100.0GB mdisk_name fast_write_state empty used_capacity 0.41MB
real_capacity 50.00GB free_capacity 50.00GB overallocation 200 autoexpand off warning 40 grainsize 32 IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B id 1 name vdisk_B capacity 100.0GB mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 55.00GB free_capacity 55.00GB overallocation 181 autoexpand off warning 40 grainsize 32 Important: If a VDisk is expanded, its type will become striped even if it was previously sequential or in image mode. If there are not enough extents to expand your VDisk to the specified size, you will receive the following error message: CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

7.4.11 Assigning a VDisk to a host


Use the svctask mkvdiskhostmap to map a VDisk to a host. When executed, this command will create a new mapping between the virtual disk and the host specified. This will essentially present this virtual disk to the host, as though the disk was directly attached to the Host. It is only after this command is executed that the host can perform I/O to the virtual disk. Optionally, a SCSI LUN ID can be assigned to the mapping. When the HBA on the host scans for devices attached to it, it will discover all Virtual Disks that are mapped to its Fibre Channel ports. When the devices are found, each one is allocated an identifier (SCSI LUN ID). For example, the first disk found will generally be SCSI LUN 1, and so on. You can control the order in which the HBA discovers Virtual Disks by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID, then the cluster will automatically assign the next available SCSI LUN ID, given any mappings that already exist with that host. Using the VDisk and host definition created in the previous sections, we will assign VDisks to hosts ready for their use. To do this, use the svctask mkvdiskhostmap command (see Example 7-58).
Example 7-58 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_B


Virtual Disk to Host map, id [2], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_C Virtual Disk to Host map, id [1], successfully created This command assigns vdisk_B and vdisk_C to host Tiger as shown in Example 7-59.
Example 7-59 svcinfo lshostvdiskmap -delim,

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 1,Tiger,2,1,vdisk_B,210000E08B892BCD,60050768018301BF2800000000000001 1,Tiger,1,2,vdisk_C,210000E08B892BCD,60050768018301BF2800000000000002 Note: The optional parameter -scsi scsi_num can help assign a specific LUN ID to a VDisk that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host. It is worth noting that some HBA device drivers will stop when they find a gap in the SCSI LUN IDs. For example: Virtual Disk 1 is mapped to Host 1 with SCSI LUN ID 1. Virtual Disk 2 is mapped to Host 1 with SCSI LUN ID 2. Virtual Disk 3 is mapped to Host 1 with SCSI LUN ID 4. When the device driver scans the HBA, it might stop after discovering Virtual Disks 1 and 2, because there is no SCSI LUN mapped with ID 3. Care should therefore be taken to ensure that the SCSI LUN ID allocation is contiguous. It is not possible to map a virtual disk to a host more than once at different LUN numbers (Example 7-60).
Example 7-60 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam vdisk_A Virtual Disk to Host map, id [0], successfully created This command maps the VDisk called vdisk_A to the host called Siam. You have now completed all the tasks required to assign a VDisk to an attached host.
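If you need to make the SCSI LUN ID allocation explicit and contiguous, as recommended in the note above, the ID could be supplied with the optional -scsi parameter. This is a sketch only; the host, VDisk, and ID are illustrative:
svctask mkvdiskhostmap -host Tiger -scsi 3 vdisk_D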

7.4.12 Showing VDisk-to-host mappings


Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific host (Example 7-61).
Example 7-61 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C From this command, you can see that the host Siam has only one VDisk called vdisk_A assigned. The SCSI LUN ID is also shown. This is the ID by which the Virtual Disk is being presented to the host. If no host is specified, all defined host to VDisk mappings will be returned.

Note: Although the -delim, flag normally comes at the end of the command string, in this case you must specify this flag before the host name. Otherwise, it returns the following message: CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.

7.4.13 Deleting a VDisk-to-host mapping


When you delete a VDisk-to-host mapping, you are not deleting the VDisk itself, only the connection from the host to the VDisk. You might do this if you mapped a VDisk to a host by mistake, or if you simply want to reassign the VDisk to another host. Use the svctask rmvdiskhostmap command to unmap a VDisk from a host, as shown in Example 7-62.
Example 7-62 svctask rmvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Tiger vdisk_D This command unmaps the VDisk called vdisk_D from the host Tiger.

7.4.14 Migrating a VDisk


From time to time, you might want to migrate VDisks from one set of MDisks to another set of MDisks to decommission an old disk subsystem, to have better balanced performance across your virtualized environment, or simply to migrate data into the SVC environment transparently using image mode. Further information on migration can be found in Chapter 9, Data migration on page 675. Important: After migration is started, it continues to completion unless it is stopped or suspended by an error condition or the VDisk being migrated is deleted. As you can see from the parameters in Example 7-63 on page 368, before you can migrate your VDisk, you must know the name of the VDisk you want to migrate and the name of the MDG to which you want to migrate. To find the name, simply run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands. When you know these details, you can issue the migratevdisk command, as shown in Example 7-63.
Example 7-63 svctask migratevdisk

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -vdisk vdisk_C This command moves vdisk_C to MDG_DS47. Note: If insufficient extents are available within your target MDG, you receive an error message. Make sure that the source and target MDisk groups have the same extent size. The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority relative to other types of I/O, you can specify 3, 2, or 1.
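To run the same migration at a reduced priority, for example, the thread count could be lowered. This is a sketch only; the -threads value shown is illustrative:
svctask migratevdisk -mdiskgrp MDG_DS47 -threads 2 -vdisk vdisk_C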

You can run the svcinfo lsmigrate command at any time to see the status of the migration process. This is shown in Example 7-64.
Example 7-64 svcinfo lsmigrate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
Note: The progress is given as a percentage complete. When the command returns no output, the migration process has finished.

7.4.15 Migrate a VDisk to an image mode VDisk


Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path. This might be useful where the SVC is used as a data mover appliance. You can use the svctask migratetoimage command to do this. To migrate a VDisk to an image mode VDisk, the following rules apply:
The destination MDisk must be greater than or equal to the size of the VDisk.
The MDisk specified as the target must be in an unmanaged state.
Regardless of the mode in which the VDisk starts, it is reported as managed mode during the migration.
Both of the MDisks involved are reported as being in image mode during the migration.
If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes.
An example of the command is shown in Example 7-65.
Example 7-65 svctask migratetoimage

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk vdisk_A -mdisk mdisk8 -mdiskgrp MDG_Image In this example, the data from vdisk_A is migrated onto mdisk8, and mdisk8 is placed into the MDisk group MDG_Image.

7.4.16 Shrinking a VDisk


The shrinkvdisksize command reduces the capacity that is allocated to the particular virtual disk by the amount that you specify. You cannot shrink the real size of a space-efficient

volume below its used size. All capacities, including changes, must be in multiples of 512 bytes. An entire extent is reserved even if it is only partially used. The default capacity units are MB. The command can be used to shrink the physical capacity that is allocated to a particular VDisk by the specified amount. The command can also be used to shrink the virtual capacity of a space-efficient VDisk without altering the physical capacity assigned to the VDisk. For a non-space-efficient VDisk, use the -size parameter. For a space-efficient VDisk real capacity, use the -rsize parameter. For the space-efficient VDisk virtual capacity, use the -size parameter. When the virtual capacity of a space-efficient VDisk is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage. The cluster arbitrarily reduces the capacity of the VDisk by removing one or more extents, or part of an extent, from those allocated to the VDisk. You cannot control which extents are removed, so you cannot assume that it is unused space that is removed. Note: Image Mode Virtual Disks cannot be reduced in size. They must first be migrated to Managed Mode. To run the shrinkvdisksize command on a mirrored VDisk, all copies of the VDisk must be synchronized.

Attention:
1. If the virtual disk contains data, do not shrink the disk.
2. Some operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons. This command can shrink FlashCopy target virtual disks to the same capacity as the source.
3. Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. If the VDisk is mapped, data is displayed.
You can determine the exact capacity of the source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command. Shrink the VDisk by the required amount by issuing the svctask shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command. Assuming your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given VDisk. An example of this command is shown in Example 7-66.
Example 7-66 svctask shrinkvdisksize

IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A This command shrinks the VDisk called vdisk_A, which previously had a total size of 80 GB, by 44 GB to a new total size of 36 GB.
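For a space-efficient VDisk, the real capacity could instead be reduced with the -rsize parameter, leaving the virtual capacity unchanged. This is a sketch only: it assumes that svctask shrinkvdisksize accepts the -rsize parameter (symmetric to expandvdisksize), and the VDisk name and amount are illustrative.
svctask shrinkvdisksize -rsize 5 -unit gb vdisk_B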

7.4.17 Showing a VDisk on an MDisk


Use the svcinfo lsmdiskmember command to display information about the VDisks that use space on a specific MDisk, as shown in Example 7-67.

Example 7-67 svcinfo lsmdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0  0
2  0
3  0
4  0
5  0
This command displays the list of all VDisk IDs that correspond to the VDisk copies that are using mdisk1. To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk command, which we discuss in more detail in 7.4, Working with VDisks on page 354.

7.4.18 Showing VDisks using a MDisk group


Use the svcinfo lsvdisk -filtervalue command, as shown in Example 7-68, to see which VDisks are part of a specific MDG. This command shows all VDisks that are part of the MDG called MDG_DS47.
Example 7-68 svcinfo lsvdisk -filtervalue: vdisks in MDG

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG_DS47 -delim , id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam e,UID 5,mdisk5,online,managed,1,MDG_DS47,36.0GB,0000000000000000,DS4700,600a0b800026b282 00003ea34851577c00000000000000000000000000000000 7,mdisk7,online,managed,1,MDG_DS47,36.0GB,0000000000000001,DS4700,600a0b80002904de 00004188485157a400000000000000000000000000000000 9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b282 00003ed6485157b600000000000000000000000000000000 12,mdisk12,online,managed,1,MDG_DS47,36.0GB,0000000000000003,DS4700,600a0b80002904 de000041ba485157d000000000000000000000000000000000 14,mdisk14,online,managed,1,MDG_DS47,36.0GB,0000000000000004,DS4700,600a0b800026b2 8200003f6c4851585200000000000000000000000000000000 18,mdisk18,online,managed,1,MDG_DS47,36.0GB,0000000000000005,DS4700,600a0b80002904 de000042504851586800000000000000000000000000000000 19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b2 8200003f9f4851588700000000000000000000000000000000 20,mdisk20,online,managed,1,MDG_DS47,36.0GB,0000000000000007,DS4700,600a0b80002904 de00004282485158aa00000000000000000000000000000000

7.4.19 Showing the MDisks from which a VDisk gets its extents
Use the svcinfo lsvdiskmember command, as shown in Example 7-69, to show which MDisks are used by a specific VDisk.
Example 7-69 svcinfo lsvdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember vdisk_D id 0


1 2 3 4 6 10 11 13 15 16 17 If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 7.2, Working with managed disks and disk controller systems on page 338 (using the ID displayed above rather than the name).

7.4.20 Showing the MDisk group from which a VDisk gets its extents
Use the svcinfo lsvdisk command, as shown in Example 7-70, to show which MDG a specific VDisk belongs to.
Example 7-70 svcinfo lsvdisk command: MDG name

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_D id 3 name vdisk_D IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 80.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF2800000000000003 throttling 0 preferred_node_id 6 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0

mdisk_grp_name MDG_DS45 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 80.00GB real_capacity 80.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command, as explained in 7.2.11, Working with managed disk groups (MDG) on page 344.

7.4.21 Showing the host to which the VDisk is mapped


To show the hosts to which a specific VDisk has been assigned, run the svcinfo lsvdiskhostmap command, as shown in Example 7-71.
Example 7-71 svcinfo lsvdiskhostmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap -delim , vdisk_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001
This command shows the host or hosts to which the VDisk vdisk_B is mapped. It is normal to see duplicated entries, because there is more than one path between the cluster and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure multipath software such as the IBM Subsystem Device Driver (SDD). Note: Although the optional -delim , flag normally comes at the end of the command string, in this case you must specify this flag before the VDisk name. Otherwise, the command does not return any data.

7.4.22 Showing the VDisks mapped to a host


To show the VDisks that are mapped to a specific host, run the svcinfo lshostvdiskmap command, as shown in Example 7-72.
Example 7-72 lshostvdiskmap command example

id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
This command shows which VDisks are mapped to the host called Siam. Note: Although the optional -delim , flag normally comes at the end of the command string, in this case you must specify this flag before the host name. Otherwise, the command does not return any data.
7.4.23 Tracing a VDisk from a host back to its physical disk


In many cases, you need to verify exactly which physical disk is presented to the host, for example, to determine from which MDG a specific volume comes. From the host side, the server administrator cannot see through the GUI which physical disks a volume resides on. To trace a volume back to its physical disks, use the following steps, starting from your multipath command prompt. 1. On your host, run the datapath query device command. You see a long disk serial number for each vpath device, as shown in Example 7-73.
Example 7-73 datapath query device

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 60050768018301BF2800000000000005 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 20 0 1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2343 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 60050768018301BF2800000000000004 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 2335 0 1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0 DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 60050768018301BF2800000000000006 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 2331 0 1 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 Note: In Example 7-73, the state of each path is OPEN. Sometimes you will find the state CLOSED, and this does not necessarily indicate any kind of problem, as it might be due to the stage of processing that the path is in. 2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks (Example 7-74).
Example 7-74 svcinfo lshostvdiskmap IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005 3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004 3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Siam. 3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified VDisk (Example 7-75).
Example 7-75 svcinfo lsvdiskmember IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri id

0 1 2 3 4 10 11 13 15 16 17

4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 7-76. The output displays the controller name and the controller LUN ID, which should be enough (provided that you gave your controller a unique name, such as its serial number) to track back to a LUN within the disk subsystem.
Example 7-76 svcinfo lsmdisk command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3 id 3 name mdisk3 status online mode managed mdisk_grp_id 0 mdisk_grp_name MDG_DS45 capacity 36.0GB quorum_index block_size 512 controller_name DS4500 ctrl_type 4 ctrl_WWNN 200400A0B8174431 controller_id 0 path_count 4 max_path_count 4 ctrl_LUN_# 0000000000000003 UID 600a0b8000174431000000e44713575400000000000000000000000000000000 preferred_WWPN 200400A0B8174433 active_WWPN 200400A0B8174433

7.5 Scripting under the CLI for SVC task automation


Scripting is well suited to automating regular operational jobs, and you can develop scripts in any available shell. To run scripts against the SVC from a console where the operating system is Windows 2000 or higher, you can either purchase licensed shell emulation software or download Cygwin from: http://www.cygwin.com Scripting enhances the productivity of SVC administrators and the integration of their storage virtualization environment. We show an example of scripting in Appendix A, Scripting on page 783. You can create your own customized scripts to automate a large number of tasks, schedule them to run at a variety of times, and run them through the CLI.

In large SAN environments where scripting with svctask commands is used, we recommend keeping the scripts as simple as possible, because fallback, documentation, and verification of a script before it is executed become harder to accomplish as complexity grows. A minimal example of the approach follows.
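This sketch lists the VDisks in one MDisk group and reports their names and capacities. It assumes that an SSH key is already loaded for the admin user, that the output columns of svcinfo lsvdisk -delim : match the header shown in Example 7-78, and that the cluster address and MDisk group name are placeholders for your environment:
#!/bin/sh
# Minimal sketch: report the name and capacity of every VDisk in one MDisk group.
CLUSTER=9.64.210.64         # cluster IP address (placeholder)
MDG=MDG_DS47                # MDisk group to report on (placeholder)
ssh admin@$CLUSTER "svcinfo lsvdisk -filtervalue mdisk_grp_name=$MDG -delim :" |
while IFS=: read id name io_id io_name status mdg_id mdg_name capacity rest
do
    [ "$id" = "id" ] && continue      # skip the header line
    echo "VDisk $name (id $id): $capacity in $mdg_name"
done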

7.6 SVC advanced operations using the CLI


In the topics that follow, we describe the commands that we consider to be advanced operations.

7.6.1 Command syntax


Two major command sets are available: The svcinfo command set allows us to query the various components within the IBM System Storage SAN Volume Controller (SVC) environment. The svctask command set allows us to make changes to the various components within the SVC. When the command syntax is shown, you see some parameters in square brackets, for example, [parameter]. This indicates that the parameter is optional in most if not all instances. Anything that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands: svcinfo -?: Shows a complete list of information commands. svctask -?: Shows a complete list of task commands. svcinfo commandname -?: Shows the syntax of information commands. svctask commandname -?: Shows the syntax of task commands. svcinfo commandname -filtervalue?: Shows what filters you can use to reduce output of the information commands. Note: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h. If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue, as stated above. Tip: You can use the up and down keys on your keyboard to recall commands recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.

7.6.2 Organizing on-screen content


Sometimes the output of a command can be long and difficult to read on screen. In cases where you need information about a subset of the total number of available items, you can use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.
Example 7-77 svcinfo lsvdisk -filtervalue? command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue? Filters for this view are : name

id IO_group_id IO_group_name status mdisk_grp_name mdisk_grp_id capacity type FC_id FC_name RC_id RC_name vdisk_name vdisk_id vdisk_UID fc_map_count copy_count

When you know the filters, you can be more selective in generating output: Multiple filters can be combined to create specific searches. You can use an * as a wildcard when using names. When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb. For example, if we issue the svcinfo lsvdisk command with no filters, we see the output shown in Example 7-78 on page 378.
Example 7-78 svcinfo lsvdisk command: no filters

id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,typ e,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count 0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000 000000,0,1 1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF280000000 0000001,0,1 2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000 000002,0,1 3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000 000003,0,1 Tip: The -delim : parameter truncates the on screen content and separates data fields with colons as opposed to wrapping text over multiple lines. That is normally used in case you need to grab some reports during script execution. If we now add a filter to our svcinfo command (such as FC_name), we can reduce the output, as shown in Example 7-79.
Example 7-79 svcinfo lsvdisk command: with filter

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=*7 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
This command shows all Virtual Disks (VDisks) whose mdisk_grp_name ends with a 7. The wildcard * can be used when filtering on names.

7.7 Managing the cluster using the CLI


In these sections we show how to administer the cluster.

7.7.1 Viewing cluster properties


Use the svcinfo lscluster command to display summary information about all clusters configured to the SVC as shown in Example 7-80.
Example 7-80 svcinfo lscluster command

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 20        0000020063E03A38
0000020061006FCA ITSO-CLS2 remote   fully_configured 50        0000020061006FCA

7.7.2 Changing cluster settings


Use the svctask chcluster command to change the settings of the cluster. This command modifies specific features of a cluster. Multiple features can be changed by issuing a single command. If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address. The service IP address is not used until a node is removed from the cluster. If this node cannot rejoin the cluster, you can bring the node up in service mode. In this mode, the node can be accessed as a stand-alone node using the service IP address. All command parameters are optional; however, you must specify at least one parameter. Note: Only a user with administrator authority can change the password. After the cluster IP address is changed, you lose the open shell connection to the cluster. You must reconnect with the newly specified IP address.

Attention: Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from active hosts and force these hosts to flush any cached data by un-mounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Some hosts might need to be rebooted to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2 model nodes in a cluster. The 8F4 nodes automatically negotiate the fabric speed on a per-port basis.
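As an illustration only, on a cluster that contains the older node models mentioned above, the fabric speed could be changed with a single parameter. This is a sketch and assumes the -speed parameter of svctask chcluster, with the value given in Gbps:
svctask chcluster -speed 2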

7.7.3 Cluster Authentication


An important point with respect to authentication in SVC 5.1 is that the superuser password replaces the previous cluster admin password. The superuser is a member of the Security Admin user group, and if this password is not known, it can be reset from the cluster front panel. The authentication methods are described in detail in Chapter 2, IBM System Storage SAN Volume Controller overview on page 7. Note: If you do not want the password to display as you enter it on the command line, omit the new password. The command line then prompts you to enter and confirm the password without the password being displayed. The only authentication setting that can be changed with the chcluster command is the Service account user password, and to change it you need administrative rights. The Service account user password is changed in Example 7-81.
Example 7-81 svctask chcluster -servicepwd (for the Service account)

IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd Enter a value for -password : Enter password: Confirm password: IBM_2145:ITSO-CLS1:admin> More information about managing users is in Managing users using the CLI on page 393.

7.7.4 iSCSI configuration


Starting with SVC 5.1, iSCSI is introduced as a supported method of communication between the SVC and hosts. All back-end storage and intra-cluster communication still uses Fibre Channel and the SAN, so iSCSI cannot be used for that communication. In Chapter 2, IBM System Storage SAN Volume Controller overview on page 7, we described in greater detail how iSCSI works. In this section, we show how to configure our cluster for use with iSCSI. We configure our nodes to use both the primary and the secondary Ethernet ports for iSCSI, including the port that carries the cluster IP address. Configuring the nodes for iSCSI does not affect the cluster IP address; the cluster IP address is changed as shown in 7.7.2, Changing cluster settings on page 379. It is important to know that there can be more than a one-to-one relationship between IP addresses and a physical connection. Up to a four-to-one (4:1) relationship is possible, consisting of two IPv4 plus two IPv6 addresses (four in total) on one physical connection, per port, per node. This function is described in Chapter 2, IBM System Storage SAN Volume Controller overview on page 7. Note: When reconfiguring IP ports, be aware that iSCSI connections that are already configured will need to reconnect if changes are made to the IP addresses of the nodes.

380

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423ch07.CLI Operations Pall and Angelo.fm

iSCSI authentication using CHAP can be configured in two ways: for the whole cluster or per host connection. Configuring CHAP for the whole cluster is shown in Example 7-82 on page 381.
Example 7-82 setting a CHAP secret for the entire cluster to passw0rd

IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd IBM_2145:ITSO-CLS1:admin> In our scenario, our cluster IP address is 9.64.210.64, and it is not affected by the configuration of the node IP addresses. We start by listing our ports using the svcinfo lsportip command. We can see that we have two ports per node to work with, and each port can have two IP addresses that can be used for iSCSI. In our example, we configure the secondary port in both nodes of our I/O group, as shown in Example 7-83.
Example 7-83 Configuring secondary ethernet port on SVC nodes

IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

If we want this iSCSI port to fail over to the other node in the I/O group in case of a failure, we need to configure that by using the svctask chnode command with the -failover parameter, identifying the node that we are assigning as the iSCSI failover node by its name or iSCSI alias. To display the iSCSI IQN, issue the svcinfo lsnode command, as shown in Example 7-84.
Example 7-84 svcinfo lsnode command.

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode 1 id 1 name node1 UPS_serial_number 100068A006 WWNN 50050768010027E2 status online IO_group_id 0 IO_group_name io_grp0 partner_node_id 2 partner_node_name node2 config_node no UPS_unique_id 2040000188440006 port_id 50050768014027E2 port_status active port_speed 4Gb port_id 50050768013027E2 port_status active port_speed 4Gb port_id 50050768011027E2 port_status active port_speed 4Gb port_id 50050768012027E2 port_status active
port_speed 4Gb hardware 8G4 iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node1 iscsi_alias failover_active no failover_name failover_iscsi_name failover_iscsi_alias Each node has a unique IQN that applies to both ports of that node. To activate the failover, we issue the svctask chnode -failover command with the node name or the iSCSI alias of the node that we want to assign as our failover node, as shown in Example 7-85.
Example 7-85 Setting failover node

IBM_2145:ITSO-CLS1:admin>svctask chnode -failover -name node2 1 Now, when we run svcinfo lsnode again, we can see that node2 is the failover node for node1, as shown in Example 7-86.
Example 7-86 Failover node set IBM_2145:ITSO-CLS1:admin>svcinfo lsnode 1 id 1 name node1 UPS_serial_number 100068A006 WWNN 50050768010027E2 status online IO_group_id 0 IO_group_name io_grp0 partner_node_id 2 partner_node_name node2 config_node no UPS_unique_id 2040000188440006 port_id 50050768014027E2 port_status active port_speed 4Gb port_id 50050768013027E2 port_status active port_speed 4Gb port_id 50050768011027E2 port_status active port_speed 4Gb port_id 50050768012027E2 port_status active port_speed 4Gb hardware 8G4

iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node1
iscsi_alias RedNodeofITSOcluster failover_active no failover_name node2 failover_iscsi_name iqn.1986-03.com.ibm:2145.ITSO-CLS1.node2 failover_iscsi_alias NodeBelowRedNode IBM_2145:ITSO-CLS1:admin>

7.7.5 Modifying IP addresses


Starting with SVC 5.1 we can use both IP ports of the nodes. However, the first time you configure a second port, all IP information is required since port 1 on the cluster must always have one stack fully configured. There are now two active cluster ports on the configuration node. If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address if connected through that port. List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP address by issuing the svctask chclusterip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 7-87.
Example 7-87 svctask chclusterip -clusterip

IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1 This command changes the current IP address of the cluster to 10.20.133.5. Important: If you specify a new cluster IP address, the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point to the new IP address, but your SSH key will still work.

7.7.6 Supported IP address formats


Table 7-1 shows the IP address formats.
Table 7-1 ip_address_list formats
IP type                                            ip_address_list format
IPv4 (no port set, SVC uses default)               1.2.3.4
IPv4 with specific port                            1.2.3.4:22
Full IPv6, default port                            1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed  1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                 2002::4ff6
Zero-compressed IPv6 with port                     [2002::4ff6]:23

We have now completed the tasks required to change the IP addresses (cluster and service) of the SVC environment.

7.7.7 Setting the cluster time zone and time


Use the -timezone parameter to specify the numeric ID of the time zone that you want to set. Issue the svcinfo lstimezones command to list the time zones that are available on the cluster; the valid time zone settings are displayed in a list.

Note: If you have changed the time zone, you must clear the error log dump directory before you can view the error log through the Web application.

Setting the cluster time zone


Perform the following steps to set the cluster time zone and time: 1. Find out for which time zone your cluster is currently configured. Enter the svcinfo showtimezone command, as shown in Example 7-88.
Example 7-88 svcinfo showtimezone IBM_2145:ITSO-CLS1:admin>svcinfo showtimezone id timezone 522 UTC

2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 7-89. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), you can go to Step 4. If not, continue with Step 3.
Example 7-89 svcinfo lstimezones IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones id timezone . . 507 Turkey 508 UCT 509 Universal 510 US/Alaska 511 US/Aleutian 512 US/Arizona 513 US/Central 514 US/Eastern 515 US/East-Indiana 516 US/Hawaii 517 US/Indiana-Starke 518 US/Michigan 519 US/Mountain 520 US/Pacific 521 US/Samoa 522 UTC . .

3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone (Example 7-90) command.
Example 7-90 svctask settimezone IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520

4. Set the cluster time by issuing the svctask setclustertime command (Example 7-91).
Example 7-91 svctask setclustertime IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY.

You have now completed the tasks necessary to set the cluster time zone and time.

7.7.8 Start statistics collection


Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A new file is created at the end of each sampling period. Separate files are created for MDisks, VDisks, and node statistics. Use the svctask startstats command to start the collection of statistics, as shown in Example 7-92.
Example 7-92 svctask startstats command

IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15 The interval we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15 minute intervals. Note: To verify that statistics collection is set, display the cluster properties again, as shown in Example 7-93.
Example 7-93 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1 statistics_status on statistics_frequency 15 --please note that the output has been shortened for easier reading. -We have now completed the tasks required to start statistics collection on the cluster.

7.7.9 Stopping a statistics collection


Use the svctask stopstats command to stop the collection of statistics within the cluster (Example 7-94).
Example 7-94 svctask stopstats

IBM_2145:ITSO-CLS1:admin>svctask stopstats This command stops the statistics collection. Do not expect any prompt message from this command. To verify that the statistics collection is stopped, display the cluster properties again, as shown in Example 7-95.
Example 7-95 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1 statistics_status off statistics_frequency 15 --please note that the output has been shortened for easier reading. -Notice that the interval parameter is not changed but the status is off. We have now completed the tasks required to stop statistics collection on our cluster.

7.7.10 Status of copy operation


Use the svcinfo lscopystatus command, as shown in Example 7-96, to determine if a file copy operation is in progress or not. Only one file copy operation can be performed at a time. The output of this command is a status of active or inactive.
Example 7-96 lscopystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lscopystatus status inactive

7.7.11 Shutting down a cluster


If all input power to an SVC cluster is to be removed for more than a few minutes (for example, if the machine room power is to be shut down for maintenance), it is important to shut down the cluster before removing the power. The reason for this is that if the input power is removed from the uninterruptible power supply units without first shutting down the cluster and the uninterruptible power supplies themselves, the uninterruptible power supply units remain operational and eventually become drained of power. When input power is restored to the uninterruptible power supplies, they start to recharge. However, the SVC does not permit any input/output (I/O) activity to be performed to the VDisks until the uninterruptible power supplies are charged enough to enable all the data on the SVC nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours. Shutting down the cluster prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored. You can use the following procedure to shut down the cluster: 1. Use the svctask stopcluster command to shut down your SVC cluster (Example 7-97).
Example 7-97 svctask stopcluster IBM_2145:ITSO-CLS1:admin>svctask stopcluster Are you sure that you want to continue with the shut down?

This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point you lose administrative contact with your cluster, and the PuTTY application automatically closes. 2. You will be presented with the following message: Warning: Are you sure that you want to continue with the shut down? Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y to this will execute the command. No feedback is then displayed. Entering anything

other than y(es) or Y(ES) results in the command not executing; again, no feedback is displayed. Important: Before shutting down a cluster, ensure that all I/O operations destined for this cluster are stopped, because you will lose access to all VDisks being provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems. There is no need to do this when you shut down a single node. Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that are using the VDisks provided by the cluster.

3. We have now completed the tasks required to shut down the cluster. To shut down the uninterruptible power supplies, just press the power button on their front panels. Note: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then you go to the service panel of one of the nodes within the cluster and press the power on button. After it is fully booted up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all nodes are fully booted, you can re-establish administrative contact using PuTTY, and your cluster is fully operational again.

7.8 Nodes
This section details the tasks that can be performed at an individual node level.

7.8.1 Viewing node details


Use the svcinfo lsnode command to view summary information about nodes defined within the SVC environment. To view more details about a specific node, append the node name (for example, SVCNode_1) to the command. Both of these commands are shown in Example 7-98. Tip: The -delim , parameter collapses each record onto a single line and separates data fields with commas, as opposed to wrapping text over multiple lines.
Example 7-98 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim , id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un ique_id,hardware 1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4 2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4 3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4 4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4 IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1 id 1 name node1 UPS_serial_number 1000739007 WWNN 50050768010037E5
status online IO_group_id 0 IO_group_name io_grp0 partner_node_id 2 partner_node_name node2 config_node yes UPS_unique_id 20400001C3240007 port_id 50050768014037E5 port_status active port_speed 4Gb port_id 50050768013037E5 port_status active port_speed 4Gb port_id 50050768011037E5 port_status active port_speed 4Gb port_id 50050768012037E5 port_status active port_speed 4Gb hardware 8G4

7.8.2 Adding a node


After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and cluster Web interface, only one node (the configuration node) is set up. To have a fully functional SVC cluster, you must add a second node to the configuration. To add a node to a cluster, gather the necessary information, as explained in these steps: Before you can add a node, you must know which unconfigured nodes you have as candidates. You can find this out by issuing the svcinfo lsnodecandidate command (Example 7-99). You will need to specify which I/O group you are adding the node to. If you enter the command svcinfo lsnode you can easily identify the I/O group id of the group you are adding your node to as shown in Example 7-100.
Example 7-99 svcinfo lsnodecandidate

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
50050768010027E2 108283     100066C108        20400001864C1008 8G4
50050768010037DC 104603     1000739004        20400001C3240004 8G4

Note: The node you want to add must be on a different UPS serial number than the UPS on the first node.
Example 7-100 svcinfo lsnode

id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un ique_id,hardware,iscsi_name,iscsi_alias 1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G 4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0,

Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration. The command to add a node to the SVC cluster is shown in Example 7-101.
Example 7-101 svctask addnode (wwnodename)

IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0 Node, id [2], successfully added This command adds the candidate node with the wwnodename 50050768010027E2 to the I/O group called io_grp0. We used the -wwnodename parameter (50050768010027E2), but we could have used the -panelname parameter (108283) instead, as shown in Example 7-102, because if you are standing in front of the node, it is easier to read the panel name than the WWNN.
Example 7-102 svctask addnode (panelname)

IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0 We also used the optional -name parameter (Node2). If you do not provide the -name parameter, the SVC automatically generates the name nodeX (where X is the ID sequence number assigned by the SVC internally). Note: If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word node, since this prefix is reserved for SVC assignment only. If the svctask addnode command returns no information and your second node is powered on and the zones are correctly defined, then pre-existing cluster configuration data might still be stored on it. If you are sure that this node is not part of another active SVC cluster, you can use the service panel to delete the existing cluster information. After this is complete, re-issue the svcinfo lsnodecandidate command and you should see the node listed.

7.8.3 Renaming a node


Use the svctask chnode command to rename a node within the SVC cluster configuration.
Example 7-103 svctask chnode -name

IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4 This command renames node id 4 to ITSO_CLS1_Node1. Note: The chnode command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash -, and the underscore _. It can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word node, since this prefix is reserved for SVC assignment only.

7.8.4 Deleting a node


Use the svctask rmnode command to remove a node from the SVC cluster configuration (Example 7-104).
Example 7-104 svctask rmnode

IBM_2145:ITSO-CLS1:admin>svctask rmnode node4 This command removes node4 from the SVC cluster. Since node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node, within the I/O group. Unfortunately the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses communication and closes automatically. We must restart the PuTTY application to establish a secure session with the new configuration node. Important: If this is the last node in an I/O Group, and there are VDisks still assigned to the I/O Group, the node will not be deleted from the cluster. If this is the last node in the cluster, and the I/O Group has no Virtual Disks remaining, the cluster will be destroyed and all virtualization information will be lost. Any data that is still required should be backed up or migrated prior to destroying the cluster.

7.8.5 Shutting down a node


On occasion it can be necessary to shut down a single node within the cluster, to perform such tasks as scheduled maintenance, while leaving the SVC environment up and running. Use the svctask stopcluster -node command, as shown in Example 7-105, to shut down a single node.
Example 7-105 svctask stopcluster -node command

IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this is done, the other node in the I/O group destages the contents of its cache and goes into write-through mode until the stopped node is powered up and rejoins the cluster.

Note: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations. The remaining node handles these activities, but be aware that the I/O group now has a single point of failure.

If this is the last node in an I/O group, all access to the VDisks in the I/O group is lost. Ensure that this is what you want to do before executing this command; you must specify the -force flag.

By re-issuing the svcinfo lsnode command (Example 7-106), we can see that the node is now offline.
Example 7-106 svcinfo lsnode


IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

Note: To restart the node, push the power-on button on the service panel of the node.

We have now completed the tasks required to view, add, delete, rename, and shut down a node within an SVC environment.

7.9 I/O groups


This section explains the tasks that you can perform at an I/O group level.

7.9.1 Viewing I/O group details


Use the svcinfo lsiogrp command, as shown in Example 7-107, to view information about I/O groups defined within the SVC environment.
Example 7-107 I/O group details

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          3           3
1  io_grp1         2          4           3
2  io_grp2         0          0           2
3  io_grp3         0          0           2
4  recovery_io_grp 0          0           0

As we can see, the SVC predefines five I/O groups. In a four node cluster (like ours), only two I/O groups are actually in use. The other I/O groups (io_grp2 and io_grp3) are for a six or eight node cluster. The recovery I/O group is a temporary home for VDisks when all nodes in the I/O group that normally owns them have suffered multiple failures. This allows us to move the VDisks to the recovery I/O group and then into a working I/O group. Of course, while temporarily assigned to the recovery I/O group, I/O access is not possible.

7.9.2 Renaming an I/O group


Use the svctask chiogrp command to rename an I/O group (Example 7-108).
Example 7-108 svctask chiogrp

IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1

This command renames the I/O group io_grp1 to io_grpA.

Note: The chiogrp command specifies the new name first. You can use the letters A to Z, a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word "iogrp", because this prefix is reserved for SVC assignment only.

To verify that the renaming was successful, issue the svcinfo lsiogrp command again, and you will see the change reflected. We have now completed the tasks required to rename an I/O group.

7.9.3 Adding and removing hostiogrp


You can map or unmap a specific host object to or from specific I/O groups in order to reach the maximum number of hosts supported by an SVC cluster. Use the svctask addhostiogrp command to map a specific host to a specific I/O group, as shown in Example 7-109.
Example 7-109 svctask addhostiogrp

IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

Parameters:
-iogrp iogrp_list      Specifies a list of one or more I/O groups that must be mapped to the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall              Specifies that all the I/O groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.
-host host_id_or_name  Identifies the host, either by ID or by name, to which the I/O groups must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O group, as shown in Example 7-110.
Example 7-110 svctask rmhostiogrp command

IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

Parameters:
-iogrp iogrp_list    Specifies a list of one or more I/O groups that must be unmapped from the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall            Specifies that all the I/O groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.
-force               If the removal of a host-to-I/O-group mapping results in the loss of VDisk-to-host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the host-to-I/O-group mapping to be deleted.
host_id_or_name      Identifies the host, either by ID or by name, from which the I/O groups must be unmapped.
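As a minimal sketch of the -force behavior described above (this invocation was not run in our environment, and we assume the flags can simply be combined in this order), the following line would remove the io_grp0 mapping for the host Kanaga even if VDisk-to-host mappings are lost as a result:

IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -force -iogrp 0 Kanaga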


7.9.4 Listing I/O groups


To list all the I/O groups mapped to the specified host and vice versa, use the svcinfo lshostiogrp command specifying the host name Kanaga, as shown in Example 7-111.
Example 7-111 svcinfo lshostiogrp

IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1  io_grp1

To list all the host objects mapped to a specified I/O group, use the svcinfo lsiogrphost command, as shown in Example 7-112.
Example 7-112 svcinfo lsiogrphost

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam

In this command, io_grp1 is the I/O group name.

7.10 Managing authentication


In the topics that follow we show how to administer authentication.

7.10.1 Managing users using the CLI


In this section, we demonstrate how to operate and manage authentication using the CLI. All users must now be a member of a predefined user group. Those groups can be listed by using the svcinfo lsusergrp command, as shown in Example 7-113.
Example 7-113 svcinfo lsusergrp

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

An example of the simple creation of a user is shown in Example 7-114. User John is added to the user group Monitor with the password m0nitor.


Example 7-114 svctask mkuser called John with password m0nitor.

IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor
User, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>

Local users are those users that are not authenticated by a remote authentication server. Remote users are those users that are authenticated by a remote central registry server. Each user group has a predefined authority role, as shown in Table 7-2.
Table 7-2 Authority roles

User group: SecurityAdmin
Role: All commands.
User: Superusers.

User group: Administrator
Role: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset.
User: Administrators that control the SVC.

User group: CopyOperator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership.
User: For those that control all copy functionality of the cluster.

User group: Service
Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime.
User: For those that perform service maintenance and other hardware tasks on the cluster.

User group: Monitor
Role: All svcinfo commands; the following svctask commands: finderr, dumperrlog, dumpinternallog, chcurrentuser; and the svcconfig backup command.
User: For those only needing view access.


7.10.2 Managing user roles and groups


Role-based security commands are used to restrict the administrative abilities of a user. We cannot create new user roles, but we can create new user groups and assign one of the predefined roles to that group. To view the user groups and their roles on your cluster, use the svcinfo lsusergrp command, as shown in Example 7-115.
Example 7-115 svcinfo lsusergrp

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
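To illustrate creating a user group with one of the predefined roles, a minimal sketch follows. The group name ITSO_Copy is purely illustrative, and we assume that the mkusergrp command accepts -name and -role parameters in this form; verify the exact syntax against the Command-Line Interface User's Guide for your code level:

IBM_2145:ITSO-CLS1:admin>svctask mkusergrp -name ITSO_Copy -role CopyOperator

If it succeeds, the new group appears in the svcinfo lsusergrp output alongside the predefined groups.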

To view the currently defined users and the user groups that they belong to, we use the svcinfo lsuser command, as shown in Example 7-116 on page 395.
Example 7-116 svcinfo lsuser

IBM_2145:ITSO-CLS2:admin>svcinfo lsuser -delim , id,name,password,ssh_key,remote,usergrp_id,usergrp_name 0,superuser,yes,no,no,0,SecurityAdmin 1,admin,no,yes,no,0,SecurityAdmin 2,Pall,yes,no,no,1,Administrator

7.10.3 Changing a user


To change a user password, issue the svctask chuser command. To change the Service account user password, see 7.7.3, Cluster Authentication on page 380. The chuser command allows you to modify a user that has already been created: you can rename the user, assign a new password (if you are logged on with administrative privileges), or move the user from one user group to another. Be aware that a user can only be a member of one group at a time; a sketch of typical invocations follows.
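The following lines are a minimal sketch rather than captures from our lab. They assume that chuser accepts the -password and -usergrp parameters and reuse the user John from Example 7-114; the password value is arbitrary, so check svctask chuser -h for the exact syntax at your code level:

IBM_2145:ITSO-CLS1:admin>svctask chuser -password n3wpassw0rd John
IBM_2145:ITSO-CLS1:admin>svctask chuser -usergrp CopyOperator John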

7.10.4 Audit Log command


The audit log can be very helpful in seeing which commands have been entered on the cluster. Most action commands issued through the old or new CLI are recorded in the audit log. The native GUI performs actions by using the CLI programs. The SVC Console performs actions by issuing CIM commands to the CIMOM, which then runs the CLI programs.


This means that actions performed using both the native GUI and the SVC Console are recorded in the audit log. The following commands are not audited:
svctask cpdumps
svctask cleardumps
svctask finderr
svctask dumperrlog
svctask dumpinternallog

The audit log contains approximately 1 MB of data, which can hold about 6,000 average-length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log. To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 7-117.
Example 7-117 catauditlog command

IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim , 291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21 292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21 293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1 294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1 295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21

If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the command svctask dumpauditlog. This command does not provide any feedback, just the prompt. To obtain a list of the audit log dumps, use svcinfo lsauditlogdumps, as described in Example 7-118.
Example 7-118 svctask dumpauditlog / svcinfo lsauditlogdumps command

IBM_2145:ITSO-CLS1:admin>svctask dumpauditlog IBM_2145:ITSO-CLS1:admin>svcinfo lsauditlogdumps id auditlog_filename 0 auditlog_0_80_20080619134139_0000020060c06fca

7.11 Managing Copy Services


In these topics we show how to manage copy services.

7.11.1 FlashCopy operations


In this section, we use a scenario to illustrate how to use commands with PuTTY to perform FlashCopy. Refer to the IBM System Storage SAN Volume Controller: Command-Line Interface Users Guide for more commands, which is available at: http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=7&bra ndind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1


Scenario description
We use the following scenario in both the command line section and the GUI section. In this scenario, we want to FlashCopy the following VDisks:
DB_Source   Database files
Log_Source  Database log files
App_Source  Application files

Because data integrity must be kept on DB_Source and Log_Source, we create consistency groups to handle the FlashCopy of DB_Source and Log_Source. In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We make two FlashCopy targets for DB_Source and Log_Source, and therefore two consistency groups. The scenario is shown in Figure 7-2.

Figure 7-2 FlashCopy scenario

7.11.2 Setting up FlashCopy


We have already created the source and target VDisks, and the source and target VDisks are identical in size, which is a requirement of the FlashCopy function:
DB_Source, DB_Target1, and DB_Target2
Log_Source, Log_Target1, and Log_Target2
App_Source and App_Target1

To set up the FlashCopy, we performed the following steps:
1. Create the FlashCopy consistency groups:
   Named FCCG1
   Named FCCG2
2. Create the FlashCopy mappings for the source VDisks, each with a copy rate of 50:
   DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
   DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
   Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
   Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
   App_Source FlashCopy to App_Target1; the mapping name is App_Map1

7.11.3 Creating a FlashCopy consistency group


To create a FlashCopy consistency group, we use the svctask mkfcconsistgrp command. The ID of the new group is returned. If you have created several FlashCopy mappings for a group of VDisks that contain elements of data for the same application, you might find it convenient to assign these mappings to a single FlashCopy consistency group. You can then issue a single prepare or start command for the whole group so that, for example, all the files for a particular database are copied at the same time. In Example 7-119, the consistency groups FCCG1 and FCCG2 are created; they will hold the FlashCopy maps of DB and Log together. This is a very important step when doing FlashCopy of database applications, because it helps to maintain data integrity during the FlashCopy.
Example 7-119 Creating two FlashCopy consistency groups

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 7-120, we check the status of the consistency groups. Each has a status of empty.
Example 7-120 Checking the status

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty

If you want to change the name of a consistency group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
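As a minimal sketch of such a rename (not run in our scenario; the new name FCCG_DB is purely illustrative, and we assume the -name parameter follows the same "new name first" pattern as the other ch* commands in this chapter):

IBM_2145:ITSO-CLS1:admin>svctask chfcconsistgrp -name FCCG_DB FCCG1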

7.11.4 Creating a FlashCopy mapping


To create a FlashCopy mapping, we use the command svctask mkfcmap. This command creates a new FlashCopy mapping, which maps a source virtual disk to a target virtual disk to prepare for subsequent copying. When executed, this command creates a new FlashCopy mapping logical object. This mapping persists until it is deleted. The mapping specifies the source and destination virtual disks. The destination must be identical in size to the source, or the mapping will fail. Issue the command svcinfo lsvdisk -bytes to find the exact size of the source VDisk for which you want to create a target disk of the same size. In a single mapping, source and destination cannot be on the same VDisk. A mapping is triggered at the point in time when the copy is required. The mapping can optionally be given a name and assigned to a consistency group. These are groups of mappings that can be triggered at the same time. This enables multiple virtual disks to be copied at the same time, which creates a consistent copy of multiple disks. This is required for database products in which the database and log files reside on different disks.

398

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423ch07.CLI Operations Pall and Angelo.fm

If no consistency group is defined, the mapping is assigned to the default group 0. This is a special group that cannot be started as a whole; mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that is given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default is 50.

Tip: There is a parameter to delete a FlashCopy mapping automatically after the background copy completes (when the mapping reaches the idle_or_copied state): svctask mkfcmap -autodelete. This option does not delete a mapping that is in a cascade with dependent mappings, because such a mapping does not reach the idle_or_copied state.

In Example 7-121, the first FlashCopy mappings for DB_Source, Log_Source, and App_Source are created.
Example 7-121 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 7-122 shows the commands to create the second FlashCopy mappings for the VDisks DB_Source and Log_Source.
Example 7-122 Create additional FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 7-123 shows the result of these FlashCopy mappings. The status of each mapping is idle_or_copied.
Example 7-123 Check the result of Multi-Target FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 idle_or_copied 0 50 100 off no
3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied

If you would like to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.
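As a hedged sketch of such a change (this line was not run in our environment, and we assume that chfcmap accepts a -copyrate parameter analogous to the -copyrate option of mkfcmap shown later in this chapter; the value 80 and the mapping name are only illustrative):

IBM_2145:ITSO-CLS1:admin>svctask chfcmap -copyrate 80 DB_Map1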

7.11.5 Preparing (pre-triggering) the FlashCopy mapping


At this point, the mapping has been created, but the cache is still accepting data for the source VDisks. You can only trigger the mapping when the cache does not contain any data for the FlashCopy source VDisks. You must issue the svctask prestartfcmap command to prepare a FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for the source VDisk and to pass through any further write data for this VDisk. When svctask prestartfcmap is executed, the mapping enters the preparing state; after the preparation is complete, it changes to the prepared state. At this point, the mapping is ready for triggering. Preparing, and the subsequent triggering, is usually performed on a consistency group basis. Only mappings belonging to consistency group 0 can be prepared on their own, because consistency group 0 is a special group that contains the FlashCopy mappings that do not belong to any consistency group. A FlashCopy mapping must be prepared before it can be triggered. In our scenario, App_Map1 is not in a consistency group. In Example 7-124, we show how we initialize the preparation for App_Map1. Another option is to add the -prep parameter to the svctask startfcmap command, which first prepares the mapping and then starts the FlashCopy. In the example, we also show how to check the status of the current FlashCopy mapping. App_Map1's status is prepared.
Example 7-124 Prepare a FlashCopy without consistency group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.6 Preparing (pre-triggering) the FlashCopy consistency group


We use the svctask prestartfcconsistgrp command to prepare a FlashCopy consistency group. As in 7.11.5, Preparing (pre-triggering) the FlashCopy mapping on page 400, this command flushes the cache of any data destined for the source VDisks and forces the cache into write-through mode until the mapping is started. The difference is that this command prepares a group of mappings (at the consistency group level) instead of one mapping. When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group to prepare all the mappings at once. Example 7-125 shows how we prepare the consistency groups for DB and Log and check the result. After the command has executed, all of the FlashCopy maps are in the prepared state, and both consistency groups are in the prepared state as well. Now we are ready to start the FlashCopy.
Example 7-125 Prepare a FlashCopy with consistency group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared


7.11.7 Starting (triggering) FlashCopy mappings


The svctask startfcmap command is used to start a single FlashCopy mapping. When it is invoked, a point-in-time (PiT) copy of the source VDisk is created on the target VDisk. When the FlashCopy mapping is triggered, it enters the copying state. The way the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source is copied to the destination. This setting is suggested for use as a backup copy only while the mapping exists in the copying state; if the copy is stopped, the destination is not usable. If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. The system then copies all the data (even unchanged data) to the destination and eventually reaches the idle_or_copied state. After this data is copied, you can delete the mapping and have a usable point-in-time copy of the source at the destination. Example 7-126 shows how we start App_Map1; after the FlashCopy is started, App_Map1 changes to the copying status.
Example 7-126 Start App_Map1

IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 prepared 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 prepared 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 copying 0 50 100 off no
3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 prepared 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 prepared 0 50 100 off no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.8 Starting (triggering) FlashCopy consistency group


We execute the svctask startfcconsistgrp command, as shown in Example 7-127, and afterwards the database can be resumed. We have created two point-in-time consistent copies of the DB and Log VDisks. After execution, the consistency groups and all of their FlashCopy maps are in the copying status.
Example 7-127 Start FlashCopy consistency group

IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying

7.11.9 Monitoring the FlashCopy progress


To monitor the background copy progress of the FlashCopy mappings, we issue the svcinfo lsfcmapprogress command for each FlashCopy mapping. Alternatively, the copy progress can also be queried by using the svcinfo lsfcmap command. As shown in Example 7-128, DB_Map1, Log_Map1, DB_Map2, and Log_Map2 each return that the background copy is 23% complete, and App_Map1 returns that the background copy is 53% complete.
Example 7-128 Monitoring background copy progress

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2  53

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state, and when all FlashCopy mappings in a consistency group enter this status, the consistency group is also in the idle_or_copied status. When in this state, the FlashCopy mapping can be deleted and the target disk can be used independently, if, for example, another target disk is to be used for the next FlashCopy of the particular source VDisk.

7.11.10 Stopping the FlashCopy mapping


The svctask stopfcmap command is used to stop a FlashCopy mapping. This command allows you to stop an active (copying) or suspended mapping; when executed, it stops a single FlashCopy mapping. When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC. The FlashCopy mapping must be prepared again, or re-triggered, to bring the target VDisk online again.

Tip: In a multi-target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the -force parameter, which stops all of the dependent maps and removes the need for the stopping copy process to run; a sketch of such an invocation follows.
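The following single line is a minimal sketch of the forced stop described in the Tip. It was not run as part of this exercise, and the mapping name DB_Map1 is simply one of the mappings from our scenario:

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap -force DB_Map1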

Note: Stopping a FlashCopy mapping should only be done when the data on the target VDisk is not in use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC if the mapping is in the copying state and its progress is not 100%.

Example 7-129 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.
Example 7-129 Stop APP_Map1 FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.11 Stopping the FlashCopy consistency group


The svctask stopfcconsistgrp command is used to stop any active FlashCopy consistency group; it stops all mappings in the consistency group. When a FlashCopy consistency group is stopped, the target VDisks of all mappings that are not 100% copied become invalid and are set offline by the SVC. The FlashCopy consistency group must be prepared again and restarted to bring these target VDisks online again.

Note: Stopping a FlashCopy consistency group should only be done when the data on the target VDisks is not in use, or when you want to modify the FlashCopy consistency group. When a consistency group is stopped, a target VDisk might become invalid and be set offline by the SVC, depending on the state of its mapping.

As shown in Example 7-130, we stop the FCCG1 and FCCG2 consistency groups. The status of the two consistency groups has changed to stopped. As you can see, the FlashCopy mappings had already completed their copy operations, so they are in the idle_or_copied status.
Example 7-130 Stop FCCG1 and FCCG2 consistency group

IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no


7.11.12 Deleting the FlashCopy mapping


To delete a FlashCopy mapping, we use the svctask rmfcmap command. When the command is executed, it attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted. Deleting a mapping only deletes the logical relationship between the two VDisks; however, when the command is issued on an active FlashCopy mapping using the -force flag, the deletion renders the data on the FlashCopy mapping target VDisk inconsistent.

Tip: If you want to use the target VDisk as a normal VDisk, monitor the background copy progress until it is complete (100% copied), and then delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping.

As shown in Example 7-131, we delete App_Map1.
Example 7-131 Delete App_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap App_Map1
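If a mapping is in the stopped state and you want to remove it anyway, accepting that the target data becomes inconsistent, the -force flag described above can be added. The following line is only a sketch using a mapping name from our scenario and was not run in our environment:

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap -force DB_Map1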

7.11.13 Deleting the FlashCopy consistency group


The command svctask rmfcconsistgrp is used to delete a FlashCopy consistency group. When executed, this command deletes the consistency group specified. If there are mappings that are members of the group, the command fails unless the -force flag is specified. If you want to delete all the mappings in the consistency group as well, you must first delete the mappings, and then delete the consistency group. As shown in Example 7-132, we delete all the maps and consistency groups, and then we check the result.
Example 7-132 remove fcmaps and fcconsistgrp

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>


7.11.14 Migrate a VDisk to a Space-Efficient VDisk


Use the following scenario to migrate a VDisk to a Space-Efficient VDisk:

1. Create a space-efficient target VDisk with exactly the same size as the VDisk that you want to migrate. Example 7-133 on page 407 shows the details of VDisk 8, which has been created as a space-efficient VDisk with the same size as the App_Source VDisk.
Example 7-133 svcinfo lsvdisk 8

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32

2. Define a FlashCopy mapping in which the non-space-efficient VDisk is the source and the space-efficient VDisk is the target. Specify a copy rate as high as possible and activate the -autodelete option for the mapping. See Example 7-134.
Example 7-134 svctask mkfcmap

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoSEV -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV command, as shown in Example 7-135.
Example 7-135 svctask prestartfcmap

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoSEV
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 7-136.
Example 7-136 svctask startfcmap

IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoSEV

5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 7-137.
Example 7-137 svcinfo lsfcmapprogress

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoSEV
id progress
0  63

6. When the background copy completes, the FlashCopy mapping is deleted automatically, as shown in Example 7-138.
Example 7-138 svcinfo lsfcmap

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source VDisk (App_Source) has been created. The migration has completed, as shown in Example 7-139.
Example 7-139 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32


Note: Regardless of the real size that you defined for the target Space-Efficient VDisk, after the background copy its real capacity will be at least the capacity of the source VDisk. To migrate a space-efficient VDisk to a fully allocated VDisk, you can follow the same scenario.
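As a hedged sketch of that reverse direction (space-efficient back to fully allocated), the same FlashCopy approach could be used with the roles swapped. The fully allocated target name App_Source_Full is hypothetical and would have to be created first with the same virtual capacity; this command was not run in our environment:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source_SE -target App_Source_Full -name MigrtoFull -copyrate 100 -autodelete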

7.11.15 Reverse FlashCopy


Starting with SVC 5.1 it is possible to have a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning. In Example 7-140, FCMAP0 is the forward FlashCopy mapping, and FCMAP0_rev is a reverse FlashCopy mapping. Its source is FCMAP0's target, and its target is FCMAP0's source. When starting a reverse FlashCopy mapping, the -restore option must be used to indicate that the user wishes to overwrite the data on the source disk of the forward mapping.
Example 7-140 Reverse FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk0 -target vdsk1 -name FCMAP0
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep FCMAP0
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk1 -target vdsk0 -name FCMAP0_rev
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep -restore FCMAP0_rev
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental:partner_FC_id:partner_FC_name:restoring
0:FCMAP0:75:vdsk0:76:vdsk1:::copying:0:10:99:off:1:FCMAP0_rev:no
1:FCMAP0_rev:76:vdsk1:75:vdsk0:::copying:99:50:100:off:0:FCMAP0:yes

FCMAP0_rev shows a restoring value of "yes" while the FlashCopy mapping is copying. After it has finished copying, the restoring value changes to "no".

7.11.16 Split-stopping of FlashCopy maps


The stopfcmap command now has a -split option. This allows the source VDisk of a map that is 100% complete to be removed from the head of a cascade when the map is stopped. For example, if we have four VDisks in a cascade (A -> B -> C -> D), and the map A -> B is 100% complete, stopfcmap -split mapAB results in mapAB becoming idle_or_copied, and the remaining cascade becomes B -> C -> D. Without the -split option, VDisk A remains at the head of the cascade (A -> C -> D). Consider this sequence of steps:
1. The user takes a backup using the mapping A -> B. A is the production VDisk; B is a backup.
2. At some later point, the user suffers corruption on A and so reverses the mapping B -> A.
3. The user then takes another backup from the production disk A, and so has the cascade B -> A -> C.

Stopping A -> B without -split results in the cascade B -> C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, they can still start mapping A -> B (using the -restore flag), but they cannot then reverse the mapping to A (B -> A or C -> A). Stopping A -> B with -split would have resulted in the cascade A -> C. This does not cause the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
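As a minimal sketch of the -split option described above, reusing a mapping name from this chapter's scenario (the command was not run in our environment):

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap -split DB_Map1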

7.12 Metro Mirror operation


Note: This example is for intercluster operation only. If you want to set up an intracluster operation, we highlight those parts of the following procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS2 at the secondary site. Details of the VDisks are shown in Table 7-3.
Table 7-3 VDisk details
Content of VDisk     VDisks at primary site  VDisks at secondary site
Database files       MM_DB_Pri               MM_DB_Sec
Database log files   MM_DBLog_Pri            MM_DBLog_Sec
Application files    MM_App_Pri              MM_App_Sec

Because data consistency is needed across the VDisks MM_DB_Pri and MM_DBLog_Pri, a consistency group, CG_W2K3_MM, is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the VDisk MM_App_Pri. The Metro Mirror setup is illustrated in Figure 7-3.


Figure 7-3 Metro Mirror scenario

7.12.1 Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate. To set up the Metro Mirror, the following steps must be performed:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters.
2. Create a Metro Mirror consistency group:
   Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
   Master: MM_DB_Pri
   Auxiliary: MM_DB_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL1
   Consistency group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   Master: MM_DBLog_Pri
   Auxiliary: MM_DBLog_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL2
   Consistency group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
   Master: MM_App_Pri
   Auxiliary: MM_App_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL3

In the following section, each step is carried out using the CLI.

7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2


We create the SVC partnership on both clusters.

Note: If you are creating an intracluster Metro Mirror, do not perform the next step; instead, go to Creating a Metro Mirror Consistency Group on page 415.

Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. As shown in Example 7-141, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. This confirms that both clusters are communicating with each other.
Example 7-141 Listing the available SVC cluster for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2

Example 7-142 shows the output of the svcinfo lscluster command before the Metro Mirror partnership is set up. We show it so that you can compare it with the output after the partnership has been created.
Example 7-142 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                          000002006AE04FC4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                          0000020063E03A38

Partnership between clusters


In Example 7-143, a partnership is created between ITSO-CLS1 and ITSO-CLS4, specifying 50 MBps of bandwidth to be used for the background copy.


To check the status of the newly created partnership, issue the svcinfo lscluster command. Also notice that the new partnership is only partially configured; it remains partially configured until the partnership is also created from the other cluster.
Example 7-143 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 50        0000020063E03A38
Example 7-144 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 50        000002006AE04FC4

7.12.3 Creating a Metro Mirror Consistency Group


In Example 7-145, we create the Metro Mirror consistency group using the svctask mkrcconsistgrp command. This consistency group will be used for the Metro Mirror relationships of database VDisks, namely MM_DB_Pri and MM_DBLog_Pri, and is named CG_W2K3_MM.
Example 7-145 Creating the Metro Mirror consistency group CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type
0  CG_W2K3_MM 000002006AE04FC4  ITSO-CLS1           0000020063E03A38 ITSO-CLS4                empty 0                  empty_group


7.12.4 Creating the Metro Mirror relationships


In Example 7-146, we create the Metro Mirror relationships MMREL1 and MMREL2, respectively for MM_DB_Pri and MM_DBLog_Pri. Also we make them members of the Metro Mirror consistency group CG_W2K3_MM. We use the svcinfo lsvdisk command to list all the VDisks in the ITSO-CLS1 cluster, and then use the svcinfo lsrcrelationshipcandidate command to show the VDisks in ITSO-CLS4. By using this command, we check the possible candidates for MM_DB_Pri. After checking all the above conditions, use the command svctask mkrcrelationship to create the Metro Mirror relationship. To verify the newly created Metro Mirror relationships, list them with the command svcinfo lsrcrelationship.
Example 7-146 Creating Metro Mirror relationships MMREL1 and MMREL2

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
13 MM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000010 0 1 empty
14 MM_Log_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000011 0 1 empty
15 MM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000012 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0  DB_Source
1  Log_Source
2  App_Source
3  App_Target_1
4  Log_Target_1
5  Log_Target_2
6  DB_Target_1
7  DB_Target_2
8  App_Source_SE
9  FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_Log_Sec
2  MM_App_Sec
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4 ITSO-CLS1 13 MM_DB_Pri 0000020063E03A38 ITSO-CLS4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro
14 MMREL2 000002006AE04FC4 ITSO-CLS1 14 MM_Log_Pri 0000020063E03A38 ITSO-CLS4 1 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro

7.12.5 Creating stand-alone Metro Mirror relationship for MM_App_Pri


In Example 7-147, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship. Notice that the state of MMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk, so the initial background synchronization is skipped, even though the VDisks are not actually synchronized in this scenario. We use it here only to illustrate the case of pre-synchronized master and auxiliary VDisks before the relationship is set up.

Tip: The -sync option is only used when the target VDisk already mirrors all the data from the source VDisk. When this option is used, there is no initial background copy between the primary VDisk and the secondary VDisk.

MMREL1 and MMREL2 are in the inconsistent_stopped state because they were not created with the -sync option, so their auxiliary VDisks still need to be synchronized with their primary VDisks.
Example 7-147 Creating a stand-alone relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro

7.12.6 Starting Metro Mirror


Now that the Metro Mirror consistency group and relationships are in place, we are ready to use Metro Mirror relationships in our environment. When implementing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy for a dataset if a failure occurs that affects the production site. In the following section, we show how to stop and start stand-alone Metro Mirror relationships and consistency groups.

Starting a stand-alone Metro Mirror relationship


In Example 7-148, we start a stand-alone Metro Mirror relationship MMREL3. Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.
Example 7-148 Starting the stand-alone Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>


7.12.7 Starting a Metro Mirror consistency group


In Example 7-149, we start the Metro Mirror consistency group CG_W2K3_MM. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all the relationships in the consistency group. Upon completion of the background copy, it enters the Consistent synchronized state.
Example 7-149 Starting the Metro Mirror consistency group.

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>

7.12.8 Monitoring the background copy progress


To monitor the background copy progress, we can use the svcinfo lsrcrelationship command. This command will show us all the defined Metro Mirror relationships if used without any arguments. In the command output, progress indicates the current background copy progress. Our Metro Mirror relationship is shown in Example 7-150. Note: Setting up SNMP traps for the SVC enables automatic notification when Metro Mirror consistency groups or relationships change state.
Example 7-150 Monitoring background copy progress example.

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1 id 13 name MMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 13 master_vdisk_name MM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4
aux_vdisk_id 0 aux_vdisk_name MM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state consistent_synchronized bg_copy_priority 50 progress 35 freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2 id 14 name MMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 14 master_vdisk_name MM_Log_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 1 aux_vdisk_name MM_Log_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state consistent_synchronized bg_copy_priority 50 progress 37 freeze_time status online sync copy_type metro When all Metro Mirror relationships have completed the background copy, the consistency group enters the consistent synchronized state, as shown in Example 7-151.
Example 7-151 Listing the Metro Mirror consistency group.

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1
RC_rel_id 14 RC_rel_name MMREL2
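For longer-running copies, the progress check can also be scripted. The following is a minimal sketch that polls the consistency group state from a management workstation over SSH until the background copy completes; the SSH user, key configuration, and the polling interval are assumptions, while the svcinfo lsrcconsistgrp command itself is the one used throughout this section.

while true; do
  # Query the consistency group and extract the value of the "state" field
  state=$(ssh admin@ITSO-CLS1 svcinfo lsrcconsistgrp CG_W2K3_MM | awk '/^state / {print $2}')
  echo "$(date '+%H:%M:%S') CG_W2K3_MM state: $state"
  # Stop polling when the background copy has completed
  [ "$state" = "consistent_synchronized" ] && break
  sleep 60
done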

7.12.9 Stopping and restarting Metro Mirror


Now that the Metro Mirror consistency group and relationships are running, we describe in this and the following sections how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships, as well as the consistency group. First, we show how to stop and restart the stand-alone Metro Mirror relationships and the consistency group.

7.12.10 Stopping a stand-alone Metro Mirror relationship


Example 7-152 shows how to stop the stand-alone Metro Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisk. As a result, the relationship enters the Idling state.
Example 7-152 Stopping stand-alone Metro Mirror relationship & enabling access to secondary VDisk

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type metro

7.12.11 Stopping a Metro Mirror consistency group


Example 7-153 shows how to stop the Metro Mirror consistency group without specifying the -access flag. As a result, the consistency group enters the Consistent stopped state.
Example 7-153 Stopping a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 If afterwards we want to enable access (write I/O) to the secondary VDisk, re-issue svctask stoprcconsistgrp, specifying the -access flag, and the consistency group transits to the Idling state, as shown in Example 7-154.
Example 7-154 Stopping a Metro Mirror consistency group and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

7.12.12 Restarting a Metro Mirror relationship in the Idling state


When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or auxiliary VDisk, consistency will be compromised. Therefore, we must issue the command with the -force flag to restart a relationship, as shown in Example 7-155.
Example 7-155 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3

master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro

7.12.13 Restarting a Metro Mirror consistency group in the Idling state


When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, then consistency will be compromised. Therefore, we must use the -force flag to start a relationship. If the -force flag is not used, then the command will fail. In Example 7-156, we change the copy direction by specifying the auxiliary VDisks to be primaries.
Example 7-156 Restarting a Metro Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

7.12.14 Changing copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationships and the consistency group.

7.12.15 Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying which copy is to act as the primary. If the VDisk specified as the primary when issuing this command is already the primary, the command has no effect. In Example 7-157, we change the copy direction for the stand-alone Metro Mirror relationship by specifying the auxiliary VDisk to be the primary. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.
Example 7-157 Switching the copy direction for a Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38

aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro

7.12.16 Switching copy direction for a Metro Mirror consistency group


When a Metro Mirror consistency group is in the consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command and specifying which copies are to act as the primaries. If the copies specified as the primaries are already the primaries when this command is issued, the command has no effect. In Example 7-158, we change the copy direction for the Metro Mirror consistency group by specifying the auxiliary VDisks to become the primaries. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transition from primary to secondary, because all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.
Example 7-158 Switching the copy direction for a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

7.12.17 Creating an SVC partnership between many clusters


Starting with SVC 5.1, it is possible to have a cluster partnership among multiple SVC clusters, which allows us to create four different configurations using a maximum of four connected clusters:
Star Configuration
Triangle Configuration
Fully-Connected Configuration
Daisy-Chaining Configuration
In this section, we describe how to configure the SVC cluster partnership for each configuration.
Important: In order to have a supported and working configuration, all of the SVC clusters must be at level 5.1 or above.
In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and D:
ITSO-CLS1 = A
ITSO-CLS2 = B
ITSO-CLS3 = C
ITSO-CLS4 = D
Example 7-159 shows the available clusters for a partnership, using the lsclustercandidate command on each cluster.
Example 7-159 Available clusters

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate id configured cluster_name 000002006AE04FC4 no ITSO-CLS1 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4

IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate id configured name 000002006AE04FC4 no ITSO-CLS1 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 000002006AE04FC4 no ITSO-CLS1 0000020061006FCA no ITSO-CLS2

7.12.18 Star Configuration partnership


Figure 7-4 on page 427 shows the Star Configuration.

Figure 7-4 Star Configuration

Example 7-160 shows the sequence of mkpartnership commands that must be executed to create a Star Configuration.
Example 7-160 Star Configuration creation using the mkpartnership command

From ITSO-CLS1 to multiple clusters IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS3 to ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS4 to ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, it is possible to configure any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is only in one relationship.
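For illustration, after the star partnership is fully configured, a relationship between the hub cluster and one of the spoke clusters can be created in the usual way with svctask mkrcrelationship. The VDisk names in the following sketch (STAR_Src on ITSO-CLS1 and STAR_Tgt on ITSO-CLS2) are hypothetical and are only meant to show the command form:

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master STAR_Src -aux STAR_Tgt -cluster ITSO-CLS2 -name STARREL1

The same rule applies as in the other configurations: a single VDisk can participate in only one relationship.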

Triangle Configuration
Figure 7-5 on page 429 shows the Triangle Configuration.

Figure 7-5 Triangle Configuration

Example 7-161 on page 429 shows the sequence of mkpartnership commands that must be executed to create a Triangle Configuration.
Example 7-161 Triangle Configuration creation

From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

After the SVC partnership has been configured, it is possible to configure any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is only in one relationship.

Fully-Connected Configuration
Figure 7-6 shows the Fully-Connected configuration.

Figure 7-6 Fully-Connected configuration

Example 7-162 shows the sequence of mkpartnership commands that must be executed to create a Fully-Connected Configuration.
Example 7-162 Fully-Connected creation

From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, it is possible to configure any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is only in one relationship.

Daisy-Chaining Configuration
Figure 7-7 on page 432 shows the Daisy-Chaining configuration.

Figure 7-7 Daisy-chaining configuration.

Example 7-163 shows the sequence of mkpartnership commands that must be executed to create a Daisy-Chaining Configuration.
Example 7-163 Daisy-Chaining configuration creation

From ITSO-CLS1 to ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS4 to ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias

0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA From ITSO-CLS4 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38

After the SVC partnership has been configured, it is possible to configure any rcrelationship or rcconsistgrp that we need, ensuring that a single VDisk is only in one relationship.

7.13 Global Mirror operation


In the following scenario, we set up an intercluster Global Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site. Note: This example is for intercluster Global Mirror only. If you want to set up an intracluster relationship instead, we highlight the parts of the following procedure that you do not need to perform. The details of the VDisks are shown in Table 7-4.
Table 7-4 Details of VDisks for Global Mirror relationship scenario

Content of VDisk       VDisks at primary site   VDisks at secondary site
Database Files         GM_DB_Pri                GM_DB_Sec
Database Log Files     GM_DBLog_Pri             GM_DBLog_Sec
Application Files      GM_App_Pri               GM_App_Sec

Since data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. The application files in this scenario are independent of the database, so we create a stand-alone Global Mirror relationship for GM_App_Pri. The Global Mirror relationship setup is illustrated in Figure 7-8.

Figure 7-8 Global Mirror scenario (between ITSO-CLS1 at the primary site and ITSO-CLS4 at the secondary site, the consistency group CG_W2K3_GM contains GM Relationship 1, GM_DB_Pri to GM_DB_Sec, and GM Relationship 2, GM_DBLog_Pri to GM_DBLog_Sec; GM Relationship 3, GM_App_Pri to GM_App_Sec, is stand-alone)

7.13.1 Setting up Global Mirror


In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate. To set up Global Mirror, the following steps must be performed:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4, on both SVC clusters:
   Bandwidth 10 MBps
2. Create a Global Mirror consistency group:
   Name CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   Master GM_DB_Pri
   Auxiliary GM_DB_Sec
   Auxiliary SVC cluster ITSO-CLS4
   Name GMREL1
   Consistency group CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   Master GM_DBLog_Pri
   Auxiliary GM_DBLog_Sec
   Auxiliary SVC cluster ITSO-CLS4
   Name GMREL2
   Consistency group CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
   Master GM_App_Pri
   Auxiliary GM_App_Sec
   Auxiliary SVC cluster ITSO-CLS4
   Name GMREL3
In the following sections, each step is carried out using the CLI.


7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4


We create an SVC partnership between both clusters. Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to Changing link tolerance and cluster delay simulation on page 436.

Pre-verification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 7-164 shows that ITSO-CLS4 is an eligible SVC cluster partnership candidate at ITSO-CLS1, and vice versa, which confirms that both clusters are communicating with each other.
Example 7-164 Listing the available SVC clusters for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate id configured cluster_name 0000020068603A42 no ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate id configured cluster_name 0000020060C06FCA no ITSO-CLS1

Example 7-165 shows the output of svcinfo lscluster before the SVC cluster partnership for Global Mirror has been set up. It is shown here for comparison with the output after the partnership has been created.
Example 7-165 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_addre ss:cluster_IP_address_6:cluster_service_IP_address_6:id_alias 0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_addre ss:cluster_IP_address_6:cluster_service_IP_address_6:id_alias 0000020063E03A38:ITSO-CLS4:local:::10.64.210.246.119:10.64.210.247:::0000020063E03 A38

Partnership between clusters


In Example 7-166, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying 10 MBps bandwidth to be used for the background copy. To verify the status of the newly created partnership, we issue the command svcinfo lscluster. Notice that the new partnership is only partially configured. It will remain partially configured until we run mkpartnership on the other cluster.

Example 7-166 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster id name location partnership bandwidth id_alias 000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4 0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38 In Example 7-167, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying 10 MBps bandwidth to be used for the background copy. After creating the partnership, verify that the partnership is fully configured by re-issuing the svcinfo lscluster command.
Example 7-167 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster id name location partnership bandwidth id_alias 0000020063E03A38 ITSO-CLS4 local 0000020063E03A38 000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster id name location partnership bandwidth id_alias 000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4 0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38

7.13.3 Changing link tolerance and cluster delay simulation


The gm_link_tolerance parameter defines how sensitive the SVC is to intercluster link overload conditions. The value is the number of seconds of continuous link difficulty that is tolerated before the SVC stops the remote copy relationships in order to prevent an impact on host I/O at the primary site. To change the value, use the following command:
svctask chcluster -gmlinktolerance link_tolerance
The link_tolerance value is between 60 and 86400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.
Recommendation: We strongly recommend that you use the default value. If the link is overloaded for a period that would impact host I/O at the primary site, the relationships are stopped to protect those hosts.
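For example, if the intercluster link is expected to be degraded during planned maintenance, the link tolerance can be temporarily disabled and then restored afterward. The following is only a sketch of the command sequence based on the values described above; verify the resulting value in the gm_link_tolerance field of the svcinfo lscluster output, as shown later in Example 7-168:

IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 0
(perform the planned maintenance on the intercluster link)
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 300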

Intercluster and intracluster delay simulation


This Global Mirror feature permits simulation of a delayed write to a remote VDisk. It allows testing that detects colliding writes, so it can be used to test an application before full deployment of the Global Mirror feature. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run one of the following commands:
For intercluster: svctask chcluster -gminterdelaysimulation <inter_cluster_delay_simulation>
For intracluster: svctask chcluster -gmintradelaysimulation <intra_cluster_delay_simulation>
The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time (in milliseconds) that secondary I/Os are delayed for intercluster and intracluster relationships, respectively. That is, they specify the number of milliseconds that I/O activity (copying a primary VDisk to a secondary VDisk) is delayed. A value from 0 to 100 milliseconds, in 1 millisecond increments, can be set for the delay simulation in the commands above. A value of zero disables the feature. To check the current settings for the delay simulation, use the following command:
svcinfo lscluster <clustername>
In Example 7-168, we show the modification of the delay simulation values and a change of the Global Mirror link tolerance parameter. We also show the changed values of the Global Mirror link tolerance and delay simulation parameters.
Example 7-168 Delay simulation and link tolerance modification

IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster 000002006AE04FC4
id 000002006AE04FC4 name ITSO-CLS1 location local partnership bandwidth total_mdisk_capacity 160.0GB space_in_mdisk_grps 160.0GB space_allocated_to_vdisks 19.00GB total_free_space 141.0GB statistics_status off statistics_frequency 15 required_memory 8192 cluster_locale en_US time_zone 520 US/Pacific code_level 5.1.0.0 (build 17.1.0908110000) FC_port_speed 2Gb console_IP id_alias 000002006AE04FC4 gm_link_tolerance 200 gm_inter_cluster_delay_simulation 20 gm_intra_cluster_delay_simulation 40 email_reply email_contact email_contact_primary email_contact_alternate

email_contact_location email_state invalid inventory_mail_interval 0 total_vdiskcopy_capacity 19.00GB total_used_capacity 19.00GB total_overallocation 11 total_vdisk_capacity 19.00GB cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method none iscsi_chap_secret auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no relationship_bandwidth_limit 25

7.13.4 Creating a Global Mirror consistency group


In Example 7-169, we create the Global Mirror consistency group using the svctask mkrcconsistgrp command. This consistency group will be used for the Global Mirror relationships for the database VDisks and is named CG_W2K3_GM.
Example 7-169 Creating the Global Mirror consistency group CG_W2K3_GM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM RC Consistency Group, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type 0 CG_W2K3_GM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

7.13.5 Creating Global Mirror relationships


In Example 7-170, we create the Global Mirror relationships GMREL1 and GMREL2 for the VDisks GM_DB_Pri and GM_DBLog_Pri, respectively, and make them members of the Global Mirror consistency group CG_W2K3_GM. We use svcinfo lsvdisk to list all of the VDisks in the ITSO-CLS1 cluster and then use the svcinfo lsrcrelationshipcandidate command to show the possible VDisk candidates for GM_DB_Pri in ITSO-CLS4. After checking all of the above conditions, we use the svctask mkrcrelationship command to create the Global Mirror relationships. To verify the newly created Global Mirror relationships, we list them with the svcinfo lsrcrelationship command.

Example 7-170 Creating Global Mirror GMREL1and GMREL2 IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state 16 GM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000013 0 1 empty 17 GM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000014 0 1 empty 18 GM_DBLog_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000015 0 1 empty IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri id vdisk_name 0 MM_DB_Sec 1 MM_Log_Sec 2 MM_App_Sec 3 GM_App_Sec 4 GM_DB_Sec 5 GM_DBLog_Sec 6 SEV IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K3_GM -name GMREL1 -global RC Relationship, id [9], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K3_GM -name GMREL2 -global RC Relationship, id [10], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL1 -global RC Relationship, id [17], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL2 -global RC Relationship, id [18], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type 17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global 18 GMREL2 000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global

7.13.6 Creating the Stand-alone Global Mirror relationship for GM_App_Pri


In Example 7-171, we create the stand-alone Global Mirror relationship GMREL3 for GM_App_Pri. After it is created, we check the status of each of our Global Mirror relationships. Notice that the state of GMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk, so the initial background synchronization is skipped. GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.
Example 7-171 Creating a stand-alone Global Mirror relationship and verifying it IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -sync -name GMREL3 -global RC Relationship, id [16], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim : id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_ name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority :progress:copy_type 16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consist ent_stopped:50:100:global 17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_G M:inconsistent_stopped:50:0:global 18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_ W2K3_GM:inconsistent_stopped:50:0:global

7.13.7 Starting Global Mirror


Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment. When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site. In this section, we show how to start the stand-alone Global Mirror relationships and the consistency group.

7.13.8 Starting a stand-alone Global Mirror relationship


In Example 7-172, we start the stand-alone Global Mirror relationship GMREL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.
Example 7-172 Starting the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id

consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global

7.13.9 Starting a Global Mirror consistency group


In Example 7-173, we start the Global Mirror consistency group CG_W2K3_GM. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group. Upon completion of the background copy, it enters the Consistent synchronized state.
Example 7-173 Starting the Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state inconsistent_copying relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

7.13.10 Monitoring background copy progress


To monitor the background copy progress, use the svcinfo lsrcrelationship command. If used without any parameters, this command shows all of the defined Global Mirror relationships. In the command output, progress indicates the current background copy progress. Our Global Mirror relationships are shown in Example 7-174. Note: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.

Example 7-174 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1 id 17 name GMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 17 master_vdisk_name GM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 4 aux_vdisk_name GM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 38 freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2 id 18 name GMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 18 master_vdisk_name GM_DBLog_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 5 aux_vdisk_name GM_DBLog_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 40 freeze_time status online sync copy_type global When all the Global Mirror relationships complete the background copy, the consistency group enters the consistent synchronized state, as shown in Example 7-151.
Example 7-175 Listing the Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38

aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

7.13.11 Stopping and restarting Global Mirror


Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships, as well as the consistency group. First, we show how to stop and restart the stand-alone Global Mirror relationships and the consistency group.

7.13.12 Stopping a stand-alone Global Mirror relationship


In Example 7-176, we stop the stand-alone Global Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisk. As a result, the relationship enters the Idling state.
Example 7-176 Stopping the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type global

7.13.13 Stopping a Global Mirror consistency group


In Example 7-177, we stop the Global Mirror consistency group without specifying the -access parameter. As a result, the consistency group enters the Consistent stopped state.
Example 7-177 Stopping a Global Mirror consistency group without -access

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 If, afterwards, we want to enable access (write I/O) for the secondary VDisk, we can re-issue the svctask stoprcconsistgrp command, specifying the -access parameter, and the consistency group transits to the Idling state, as shown in Example 7-154.
Example 7-178 Stopping a Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

7.13.14 Restarting a Global Mirror relationship in the Idling state


When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk, consistency will be compromised. Therefore, we must specify the -force parameter to restart the relationship; if the -force parameter is not used, the command fails. This is shown in Example 7-179.
Example 7-179 Restarting a Global Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global

7.13.15 Restarting a Global Mirror consistency group in the Idling state


When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency will be compromised. Therefore, we must specify the -force parameter to start the relationship; if the -force parameter is not used, the command fails. In Example 7-180, we restart the consistency group and change the copy direction by specifying the auxiliary VDisks to be the primaries.
Example 7-180 Restarting a Global Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

7.13.16 Changing direction for Global Mirror


In this section, we show how to change the copy direction of the stand-alone Global Mirror relationships and the consistency group.

7.13.17 Switching copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying which copy is to act as the primary. If the VDisk specified as the primary when issuing this command is already the primary, the command has no effect. In Example 7-181, we change the copy direction for the stand-alone Global Mirror relationship by specifying the auxiliary VDisk to be the primary. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.
Example 7-181 Switching the copy direction for a Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress

freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global

7.13.18 Switching copy direction for a Global Mirror consistency group


When a Global Mirror consistency group is in the consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command and specifying which copies are to act as the primaries. If the copies specified as the primaries are already the primaries when this command is issued, the command has no effect. In Example 7-182, we change the copy direction for the Global Mirror consistency group by specifying the auxiliary VDisks to become the primaries. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transition from primary to secondary, because all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.
Example 7-182 Switching the copy direction for a Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master
state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

7.14 Service and maintenance


This section details the various service and maintenance tasks that you can execute within the SVC environment.

7.14.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade package versions is four positive integers separated by periods, for example, 5.1.0.0. Each software package is given a unique number. Note: It is mandatory to be running SVC 4.3.1.7 cluster code before upgrading to SVC 5.1.0.0 cluster code. Check the recommended software levels on the Web at: http://www.ibm.com/storage/support/2145

SVC Software Upgrade Test Utility


This utility, which resides on the master console, checks the software levels in the system against the recommended levels, which are documented on the support Web site. You are informed if the software levels are up-to-date or if you need to download and install newer levels. The utility and installation instructions can be downloaded from this link: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585 After the software file has been uploaded to the cluster (to the /home/admin/upgrade directory), it can be selected and applied to the cluster. This is performed from the Web interface, which uses the svctask applysoftware command. When a new code level is applied, it is automatically installed on all of the nodes within the cluster. The underlying command-line tool runs the sw_preinstall script, which checks the validity of the upgrade file and whether it can be applied over the current level. If the upgrade file is unsuitable, the preinstall script deletes the files, which prevents the buildup of invalid files on the cluster.
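As an illustration, after the package has been uploaded to /home/admin/upgrade, the upgrade can also be applied directly from the CLI. The package file name below is hypothetical; check the exact file name of the downloaded package and the applysoftware syntax in the CLI reference for your code level:

IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file IBM2145_INSTALL_5.1.0.0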

Precaution before upgrade


Software installation is normally considered to be a customer task. The SVC supports concurrent software upgrade; that is, the software upgrade can be performed concurrently with user I/O operations and certain management activities. However, only limited CLI commands are operational from the time that the install command is started until the upgrade operation has either terminated successfully or been backed out, and certain commands fail with a message indicating that a software upgrade is in progress. Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs are working. Otherwise, the applications might have I/O failures during the software upgrade. You can check the paths by using the SDD commands, as shown in Example 7-183.
Example 7-183 query adapter

#datapath query adapter Active Adapters :2 Adpt# 0 1 Name State fscsi0 NORMAL fscsi1 NORMAL Mode ACTIVE ACTIVE Select 1445 1888 Errors 0 0 Paths 4 4 Active 4 4

#datapath query device

Total Devices : 2

DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#     Adapter/Hard Disk   State   Mode     Select  Errors
    0       fscsi0/hdisk3     OPEN    NORMAL        0       0
    1       fscsi1/hdisk7     OPEN    NORMAL      972       0

DEV#:   1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#     Adapter/Hard Disk   State   Mode     Select  Errors
    0       fscsi0/hdisk4     OPEN    NORMAL      784       0
    1       fscsi1/hdisk8     OPEN    NORMAL        0       0


Note: During a software upgrade, there are periods when not all of the nodes in the cluster are operational, and as a result the cache operates in write-through mode. This has an impact on throughput, latency, and bandwidth.
It is also worth double-checking that your UPS power configuration is set up correctly (even if your cluster is running without problems). Specifically, make sure:
That your UPSs all get their power from an external source and are not daisy chained. In other words, make sure that each UPS is not supplying power to another node's UPS.
That the power cable and the serial cable coming from each node go back to the same UPS. If the cables are crossed and go back to different UPSs, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.
Important: Do not share the SVC UPS with any other devices.
You must also ensure that all I/O paths are working for each host that is running I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands. You do not need to check hosts that have no active I/O operations to the SAN during the software upgrade.

Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see 7.14.9, Backing up the SVC cluster configuration on page 464) and save the backup config file in a safe place.
2. Also, save the data collection for support diagnosis just in case of problems, as shown in Example 7-184.
Example 7-184 svc_snap

IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz

3. List the dump generated by the previous command, as shown in Example 7-185.
Example 7-185 svcinfo ls2145dumps

IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id  2145_filename
0   svc.config.cron.bak_node3
1   svc.config.cron.bak_SVCNode_2



2   svc.config.cron.bak_node1
3   dump.104643.070803.015424
4   dump.104643.071010.232740
5   svc.config.backup.bak_ITSOCL1_N1
6   svc.config.backup.xml_ITSOCL1_N1
7   svc.config.backup.tmp.xml
8   svc.config.cron.bak_ITSOCL1_N1
9   dump.104643.080609.202741
10  104643.080610.154323.ups_log.tar.gz
11  104643.trc.old
12  dump.104643.080609.212626
13  104643.080612.221933.ups_log.tar.gz
14  svc.config.cron.bak_Node1
15  svc.config.cron.log_Node1
16  svc.config.cron.sh_Node1
17  svc.config.cron.xml_Node1
18  dump.104643.080616.203659
19  104643.trc
20  ups_log.a
21  snap.104643.080617.002427.tgz
22  ups_log.b

4. Save the generated dump in a safe place using the pscp command, as shown in Example 7-186.
Example 7-186 pscp -load

C:\>pscp -load ITSOCL1 admin@9.43.86.117:/dumps/snap.104643.080617.002427.tgz c:\
snap.104643.080617.002427 | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 7-187.
Example 7-187 pscp -load

C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%

6. Upload the SAN Volume Controller Software Upgrade Test Utility using PuTTY Secure Copy. Enter the command as shown in Example 7-188.
Example 7-188 Upload utility

C:\>pscp -load ITSOCL1 IBM2145_INSTALL_svcupgradetest_1.11 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

7. Check that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 7-189.
Example 7-189 svcinfo lssoftwaredumps

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id  software_filename


0   IBM2145_INSTALL_4.3.0.0
1   IBM2145_INSTALL_svcupgradetest_1.11

8. Now that the packages are uploaded, first install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 7-190.
Example 7-190 svctask applysoftware

IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_1.11
CMMVC6227I The package installed successfully.

9. Using the following command, test the upgrade for known issues that may prevent a software upgrade from completing successfully, as shown in Example 7-191.
Example 7-191 svcupgradetest

IBM_2145:ITSO-CLS1:admin>svcupgradetest
svcupgradetest version 1.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the above command produces any errors, troubleshoot them using the maintenance procedures before continuing further.

10. Now use the svctask applysoftware command to apply the software upgrade, as shown in Example 7-192.
Example 7-192 Apply upgrade command example

IBM_2145:ITSOSVC42A:admin>svctask applysoftware -file IBM2145_INSTALL_4.3.0.0

While the upgrade is running, you can check the status, as shown in Example 7-193.
Example 7-193 Check update status

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwareupgradestatus
status
upgrading

11. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted in turn. If a node does not restart automatically during the upgrade, it has to be repaired manually.

Note: If you are using SSDs, the data on the SSD within the restarted node is not available during the reboot.

12. Eventually both nodes display Cluster: on line one of the SVC front panel and the name of your cluster on line two. Be prepared for a long wait (in our case, we waited approximately 40 minutes).


Note: During this process, both the CLI and the GUI vary from sluggish (very slow) to unresponsive. The important thing is that I/O to the hosts continues.

13. To verify that the upgrade was successful, you can perform either of the following options:
Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in Example 7-194. We have truncated the lscluster and lsnodevpd information for this example.
Example 7-194 svcinfo lscluster and lsnodevpd commands

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
id 0000020060806FCA
name ITSO-CLS1
location local
partnership
bandwidth
cluster_IP_address 9.43.86.117
cluster_service_IP_address 9.43.86.118
total_mdisk_capacity 756.0GB
space_in_mdisk_grps 756.0GB
space_allocated_to_vdisks 156.00GB
total_free_space 600.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.252.0
default_gateway 9.43.85.1
time_zone 522 UTC
email_setting
email_id
code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply itsotest@ibm.com
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6


default_gateway_6
total_vdiskcopy_capacity 156.00GB
total_used_capacity 156.00GB
total_overallocation 20
total_vdisk_capacity 156.00GB
IBM_2145:ITSO-CLS1:admin>

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1

system board: 24 fields
part_number 31P0906
system_serial_number 13DVT31
number_of_processors 4
number_of_memory_slots 8
number_of_fans 6
number_of_FC_cards 1
number_of_scsi/ide_devices 2
BIOS_manufacturer IBM
BIOS_version -[GFE136BUS-1.09]-
BIOS_release_date 02/08/2008
system_manufacturer IBM
system_product IBM System x3550 -[21458G4]-
...

software: 6 fields
code_level 4.3.0.0 (build 8.15.0806110000)
node_name Node1
ethernet_status 1
WWNN 0x50050768010037e5
id 1

Copy the error log to your management workstation, as explained in 7.14.2, Running maintenance procedures on page 454. Open it in WordPad and search for Software Install completed.

You have now completed the tasks required to upgrade the SVC software.

7.14.2 Running maintenance procedures


Use the svctask finderr command to generate a list of any unfixed errors in the system. This command analyzes the last generated log that resides in the /dumps/elogs/ directory on the cluster. If you want to generate a new log before analyzing unfixed errors, run the svctask dumperrlog command (Example 7-195).
Example 7-195 svctask dumperrlog

IBM_2145:ITSO-CLS2:admin>svctask dumperrlog

This generates a file called errlog_timestamp, such as errlog_100048_080618_042419, where:
errlog is part of the default prefix for all error log files.


100048 is the panel name of the current configuration node.
080618 is the date (YYMMDD).
042419 is the time (HHMMSS).
You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 7-196).
Example 7-196 svctask dumperrlog -prefix

IBM_2145:ITSO-CLS2:admin>svctask dumperrlog -prefix svcerrlog

This command creates a file called svcerrlog_timestamp. To see what the file name is, enter the following command (Example 7-197).
Example 7-197 svcinfo lserrlogdumps

IBM_2145:ITSO-CLS2:admin>svcinfo lserrlogdumps
id  filename
0   errlog_100048_080618_042049
1   errlog_100048_080618_042128
2   errlog_100048_080618_042355
3   errlog_100048_080618_042419
4   errlog_100048_080618_175652
5   errlog_100048_080618_175702
6   errlog_100048_080618_175724
7   errlog_100048_080619_205900
8   errlog_100048_080624_170214
9   svcerrlog_100048_080624_170257

Note: A maximum of ten error log dump files per node is kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. The directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node in order to maintain the maximum number of files, but it does not delete files from other nodes unless you issue the cleardumps command.

After you generate your error log, you can issue the svctask finderr command to scan it for any unfixed errors, as shown in Example 7-198.
Example 7-198 svctask finderr

IBM_2145:ITSO-CLS2:admin>svctask finderr
Highest priority unfixed error code is [1230]

As you can see, we have one unfixed error on our system. To learn more about this unfixed error, you need to download the error log onto your own PC and look at it in more detail. Use PuTTY Secure Copy to copy the file from the cluster to your local management workstation, as shown in Example 7-199 on page 455.
Example 7-199 pscp command: Copy error logs off SVC

In W2K3: Start → Run → cmd


C:\Program Files\PuTTY>pscp -load SVC_CL2 admin@9.43.86.119:/dumps/elogs/svcerrlog_100048_080624_170257 c:\temp\svcerrlog.txt
svcerrlog.txt | 6390 kB | 3195.1 kB/s | ETA: 00:00:00 | 100%

In order to use the Run option, you must know where your pscp.exe is located. In this case, it is in C:\Program Files\PuTTY\. This command copies the file called svcerrlog_100048_080624_170257 to the C:\temp directory on our local workstation and names it svcerrlog.txt. Open the file in WordPad (Notepad does not format the output as well). You should see information similar to what is shown in Example 7-200. The list was truncated for the purposes of this example.
Example 7-200 errlog in WordPad

Error Log Entry 400
 Node Identifier       : Node2
 Object Type           : device
 Object ID             : 0
 Copy ID               :
 Sequence Number       : 37404
 Root Sequence Number  : 37404
 First Error Timestamp : Sat Jun 21 00:08:21 2008
                       : Epoch + 1214006901
 Last Error Timestamp  : Sat Jun 21 00:11:36 2008
                       : Epoch + 1214007096
 Error Count           : 2
 Error ID              : 10013 : Login Excluded
 Error Code            : 1230 : Login excluded
 Status Flag           : UNFIXED
 Type Flag             : TRANSIENT ERROR

 03 33 33 04 ... (raw sense data bytes follow in the actual log; not reproduced here)

Scrolling through, or searching for the term unfixed, you can find more detail about the problem. There can be more entries in the error log that have a status of unfixed. After you take the necessary steps to rectify the problem, you can mark the error as fixed in the log by issuing the svctask cherrstate command against its sequence number (Example 7-201).
Example 7-201 svctask cherrstate

IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37404

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 7-202.

Example 7-202 unfix

IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37406 -unfix

7.14.3 Setting up SNMP notification


To set up error notification, use the svctask mksnmpserver command. An example of the mksnmpserver command is shown in Example 7-203.
Example 7-203 svctask mksnmpserver

IBM_2145:ITSO-CLS2:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created

This command sends all error, warning, and informational events to the SVC community on the SNMP manager with the IP address 9.43.86.160.

7.14.4 Setting up syslog event notification


Starting with SVC 5.1, it is possible to send log messages to a defined syslog server. The SVC now provides support for syslog in addition to e-mail and SNMP traps. The syslog protocol is a client/server standard for forwarding log messages from a sender to a receiver on an IP network, and it can be used to integrate log messages from different types of systems into a central repository. SVC 5.1 can be configured to send information to up to six syslog servers. To configure a syslog server by using the CLI, use the svctask mksyslogserver command, as shown in Example 7-204. Running this command with the -h parameter gives you information about all of the available options. In our example, we only configure the SVC to use the default values for our syslog server.
Example 7-204 Configuring the Syslog

IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created

When we have configured our syslog server, we can display which syslog servers are currently configured in our cluster, as shown in Example 7-205.
Example 7-205 svcinfo lssyslogserver

IBM_2145:ITSO-CLS2:admin>svcinfo lssyslogserver
id  name         IP_address     facility  error  warning  info
0   Syslogsrv    10.64.210.230  4         on     on       on
1   Syslogserv1  10.64.210.231  0         on     on       on


7.14.5 Configuring error notification using an email server


The SAN Volume Controller can use an e-mail server to send event notification and inventory e-mails to e-mail users. It can transmit any combination of error, warning, and informational notification types. The SAN Volume Controller supports up to six e-mail servers to provide redundant access to the external e-mail network. The e-mail servers are used in turn until the e-mail is successfully sent from the SAN Volume Controller. The attempt is successful when the SAN Volume Controller gets a positive acknowledgement from an e-mail server that the e-mail has been received by the server. If no port is specified, port 25 is used as the default, as shown in Example 7-206.
Note: Before the SVC can start sending e-mails, we need to run the svctask startemail command, which enables this service.
Example 7-206 The mkemailserver command syntax

IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an e-mail user that will receive e-mail notifications from the SVC cluster. Up to 12 users can be defined to receive e-mails from our SVC. Using the svcinfo lsemailuser command, we can verify who is already registered and what kind of information is sent to that user, as shown in Example 7-207.
Example 7-207 .svcinfo lsemailuser

IBM_2145:ITSO-CLS2:admin>svcinfo lsemailuser
id  name                address               user_type  error  warning  info  inventory
0   IBM_Support_Center  callhome0@de.ibm.com  support    on     off      off   on

We can also create a new user as shown in Example 7-208 for a SAN administrator.
Example 7-208 svctask mkemailuser

IBM_2145:ITSO-CLS2:admin>svctask mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [1], successfully created

7.14.6 Analyzing the error log


The following types of events and errors are logged in the error log:
Events: State changes that are detected by the cluster software and that are logged for informational purposes. Events are recorded in the cluster error log.
Errors: Hardware or software problems that are detected by the cluster software and that require some repair. Errors are recorded in the cluster error log.

Unfixed errors: Errors that were detected and recorded in the cluster error log and that have not yet been corrected or repaired.
Fixed errors: Errors that were detected and recorded in the cluster error log and that have subsequently been corrected or repaired.
To display the error log, use the svcinfo lserrlog or svcinfo caterrlog commands, as shown in Example 7-209 (the output is the same).
Example 7-209 svcinfo caterrlog command

IBM_2145:ITSOSVC42A:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
12:_grp:no:no:5:SVCNode_1:0:0:070606094858:070606094858:1:00990145
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094539:070606094539:1:00990173
0:internal:no:no:5:SVCNode_1:0:0:070606094507:070606094507:1:00990219
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094208:070606094208:1:00990148
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094139:070606094139:1:00990145
.........

IBM_2145:ITSO-CLS1:admin>svcinfo caterrlog -delim ,
id,type,fixed,SNMP_trap_raised,error_type,node_name,sequence_number,root_sequence_number,first_timestamp,last_timestamp,number_of_errors,error_code,copy_id
0,cluster,no,yes,6,n4,171,170,080624115947,080624115947,1,00981001,
0,cluster,no,yes,6,n4,170,170,080624115932,080624115932,1,00981001,
0,cluster,no,no,5,n1,0,0,080624105428,080624105428,1,00990101,
0,internal,no,no,5,n1,0,0,080624095359,080624095359,1,00990219,
0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
.

This command views the error log that was last generated. Use the method described in 7.14.2, Running maintenance procedures on page 454 to upload and analyze the error log in more detail.

To clear the error log, you can issue the svctask clearerrlog command, as shown in Example 7-210.
Example 7-210 svctask clearerrlog

IBM_2145:ITSO-CLS1:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag stops any confirmation requests from appearing. When executed, this command clears all entries from the error log. It proceeds even if there are unfixed errors in the log, and it also clears any status events that are in the log.


This is a destructive command for the error log and should only be used when you have either rebuilt the cluster, or when you have fixed a major problem that has caused many entries in the error log that you do not wish to manually fix.

7.14.7 License settings


To change the licensing feature settings, use the svctask chlicense command. Before you change the licensing, you can display the licenses you already have by issuing the svcinfo lslicense command, as shown in Example 7-211.
Example 7-211 svcinfo lslicense command

IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 20
license_virtualization 80

The current license settings for the cluster are displayed in the viewing license settings log panel. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries because feature options must be set as part of the Web-based cluster creation process.

Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro Mirror and Global Mirror feature. The command you need to enter is shown in Example 7-212.
Example 7-212 svctask chlicense

IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25

To turn a feature off, specify 0 TB as the capacity for the feature you want to disable. To verify that the changes you have made are reflected in your SVC configuration, issue the svcinfo lslicense command as before (see Example 7-213).
Example 7-213 svcinfo lslicense command: Verifying changes

IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 25
license_virtualization 80

7.14.8 Listing dumps


Several commands are available for you to list the dumps that were generated over a period of time. You can use an lsxxxxdumps command, where xxxx identifies the dump type, to return a list of dumps in the appropriate directory. The available commands are the following:


lserrlogdumps
lsfeaturedumps
lsiotracedumps
lsiostatsdumps
lssoftwaredumps
ls2145dumps
If no node is specified, the dumps that are available on the configuration node are listed.

Error or event dump


Dumps contained in the /dumps/elogs directory are dumps of the contents of the error and event log at the time that the dump was taken. An error or event log dump is created by using the svctask dumperrlog command. This dumps the contents of the error or event log to the /dumps/elogs directory. If no file name prefix is supplied, the default errlog_ is used. The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front panel name. If the command is used with the -prefix option, then the value entered for the -prefix is used instead of errlog. The command to list all dumps in the /dumps/elogs directory is svcinfo lserrlogdumps (Example 7-214).
Example 7-214 svcinfo lserrlogdumps

IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps
id  filename
0   errlog_104643_080617_172859
1   errlog_104643_080618_163527
2   errlog_104643_080619_164929
3   errlog_104643_080619_165117
4   errlog_104643_080624_093355
5   svcerrlog_104643_080624_094301
6   errlog_104643_080624_120807
7   errlog_104643_080624_121102
8   errlog_104643_080624_122204
9   errlog_104643_080624_160522

Featurization log dump


Dumps contained in the /dumps/feature directory are dumps of the featurization log. A featurization log dump is created by using the svctask dumpinternallog command. This dumps the contents of the featurization log to the /dumps/feature directory to a file called feature.txt. Only one of these files exists, so every time the svctask dumpinternallog command is run, this file is overwritten. The command to list all dumps in the /dumps/feature directory is svcinfo lsfeaturedumps (Example 7-215).
Example 7-215 svcinfo lsfeaturedumps

IBM_2145:ITSO-CLS1:admin>svcinfo lsfeaturedumps
id  feature_filename
0   feature.txt

I/O trace dump


Dumps contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data that is traced depends on the options specified by the svctask settrace command. The

collection of the I/O trace data is started by using the svctask starttrace command. The I/O trace data collection is stopped when the svctask stoptrace command is used. When the trace is stopped, the data is written to the file. The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value entered by the user for the -filename parameter in the svctask settrace command. The command to list all dumps in the /dumps/iotrace directory is svcinfo lsiotracedumps (Example 7-216).
Example 7-216 svcinfo lsiotracedumps

IBM_2145:ITSO-CLS1:admin>svcinfo lsiotracedumps
id  iotrace_filename
0   tracedump_104643_080624_172208
1   iotrace_104643_080624_172451
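The overall trace flow is: set the trace options, start the trace, run the workload of interest, stop the trace, and then list the resulting dump. A minimal sketch of that sequence follows; the -filename value mytrace is only an illustration, and the remaining svctask settrace options (which define what is traced) are described in the CLI guide:

IBM_2145:ITSO-CLS1:admin>svctask settrace -filename mytrace
IBM_2145:ITSO-CLS1:admin>svctask starttrace
IBM_2145:ITSO-CLS1:admin>svctask stoptrace
IBM_2145:ITSO-CLS1:admin>svcinfo lsiotracedumps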

I/O statistics dump


Dumps contained in the /dumps/iostats directory are dumps of the I/O statistics for disks on the cluster. An I/O statistics dump is created by using the svctask startstats command. As part of this command, you can specify a time interval at which you want the statistics to be written to the file (the default is 15 minutes). Every time the time interval is encountered, the I/O statistics that are collected up to this point are written to a file in the /dumps/iostats directory. The file names used for storing I/O statistics dumps are m_stats_NNNNNN_YYMMDD_HHMMSS, or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks or VDisks. Here NNNNNN is the node front panel name. The command to list all dumps in the /dumps/iostats directory is svcinfo lsiostatsdumps (Example 7-217).
Example 7-217 svcinfo lsiostatsdumps

IBM_2145:ITSO-CLS1:admin>svcinfo lsiostatsdumps
id  iostat_filename
0   Nm_stats_104603_071115_020054
1   Nn_stats_104603_071115_020054
2   Nv_stats_104603_071115_020054
3   Nv_stats_104603_071115_022057
........
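The statistics collection itself is started with the svctask startstats command mentioned above. A minimal sketch, explicitly specifying the default 15-minute interval, is:

IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

After each interval elapses, new m_stats_* and v_stats_* files appear in /dumps/iostats and can be listed with svcinfo lsiostatsdumps.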

Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Files are copied to this directory when you want to perform a software upgrade. Example 7-218 shows the command.
Example 7-218 svcinfo lssoftwaredumps

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id  software_filename
0   IBM2145_INSTALL_4.3.0.0


Other node dumps


All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example, append the node name to the end of any of the above commands). If this identifier is not specified, the list of files on the current configuration node is displayed. If the node identifier is specified, the list of files on that node is displayed.
However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, if you discover a dump file on another node and want to copy it to your management workstation for further analysis, you must first issue the svctask cpdumps command to copy the file from the non-configuration node to the current configuration node. Subsequently, you can copy it to the management workstation using PuTTY Secure Copy. In addition to the directory, you can specify a file filter. For example, if you specify /dumps/elogs/*.txt, all files in the /dumps/elogs directory that end in .txt are copied.

Note: The following rules apply to the use of wildcards with the SAN Volume Controller CLI: The wildcard character is an asterisk (*). The command can contain a maximum of one wildcard. When you use a wildcard, you must surround the filter entry with double quotation marks (""), as follows:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"

An example of the cpdumps command is shown in Example 7-219.


Example 7-219 svctask cpdumps

IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis, as described earlier.

To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps off a node other than the current configuration node (the default for the svctask cleardumps command). The commands in Example 7-220 clear all logs and dumps from SVC node n1.
Example 7-220 svctask cleardumps command

IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iostats n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iotrace n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/feature n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/config n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/elog n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /home/admin/upgrade n1


Application abends dump


Dumps contained in the /dumps directory result from application abends (abnormal ends). The default file names are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN is the node front panel name. In addition to the dump files, there might be trace files written to this directory; these are named NNNNNN.trc. The command to list all dumps in the /dumps directory is svcinfo ls2145dumps (Example 7-221).
Example 7-221 svcinfo ls2145dumps

IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id  2145_filename
0   svc.config.cron.bak_node3
1   svc.config.cron.bak_SVCNode_2
2   dump.104643.070803.015424
3   dump.104643.071010.232740
4   svc.config.backup.bak_ITSOCL1_N1

7.14.9 Backing up the SVC cluster configuration


You can back up your cluster configuration by using the Backing Up a Cluster Configuration screen or the CLI svcconfig command. This section describes the overall procedure for backing up your cluster configuration and the conditions that must be satisfied to perform a successful backup.
The backup command extracts configuration data from the cluster and saves it to svc.config.backup.xml in /tmp. A file svc.config.backup.sh is also produced; you can study this file to see what other commands were issued to extract the information. A log, svc.config.backup.log, is also produced; you can study this log for details about what was done and when, including information about the other commands issued. Any pre-existing svc.config.backup.xml file is archived as svc.config.backup.bak, and only one such archive is kept.
We recommend that you immediately move the .xml file and related key files (see the limitations below) off the cluster for archiving and then erase the files from /tmp using the svcconfig clear -all command. We also recommend that you change all objects that have default names to non-default names. Otherwise, a warning is produced for each object with a default name, and the object is restored with its original name with _r appended. The underscore (_) prefix is reserved for backup and restore command usage and should not be used in any object names.
Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool, but supplements such a tool with a way to back up and restore the client's configuration. To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After restoration of the SVC configuration, you are expected to fully restore user (non-configuration) data to the cluster's disks.
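As a sketch of that recommendation (the PuTTY session name ITSOCL1 and the local c:\backups directory are only placeholders; the IP address is the ITSO-CLS1 cluster address used elsewhere in this chapter), the backup, offload, and cleanup sequence could look like this:

IBM_2145:ITSO-CLS1:admin>svcconfig backup

Then, from the management workstation:

C:\>pscp -load ITSOCL1 admin@9.43.86.117:/tmp/svc.config.backup.xml c:\backups\
C:\>pscp -load ITSOCL1 admin@9.43.86.117:/tmp/svc.config.backup.sh c:\backups\
C:\>pscp -load ITSOCL1 admin@9.43.86.117:/tmp/svc.config.backup.log c:\backups\

Finally, back on the SVC CLI:

IBM_2145:ITSO-CLS1:admin>svcconfig clear -all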

Prerequisites
You must have the following prerequisites in place:
All nodes must be online.


No object name can begin with an underscore.
All objects should have non-default names, that is, names that are not assigned by the SAN Volume Controller. Although we recommend that objects have non-default names at the time the backup is taken, this is not mandatory. Objects with default names are renamed when they are restored.
Example 7-222 shows an example of the svcconfig backup command.
Example 7-222 svcconfig backup command

IBM_2145:ITSO-CLS1:admin>svcconfig backup
......
CMMVC6130W Inter-cluster partnership fully_configured will not be restored
...................
CMMVC6112W io_grp io_grp0 has a default name
CMMVC6112W io_grp io_grp1 has a default name
CMMVC6112W mdisk mdisk18 has a default name
CMMVC6112W mdisk mdisk19 has a default name
CMMVC6112W mdisk mdisk20 has a default name
................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully

Example 7-223 shows the pscp command that copies the backup file to the management workstation.
Example 7-223 pscp command

C:\Program Files\PuTTY>pscp -load SVC_CL1 admin@9.43.86.117:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration.
2. Store the backup configuration on some form of tertiary storage. You must copy the backup file from the cluster or it becomes lost if the cluster crashes.
3. If a severe enough failure occurs, the cluster might be lost. Both configuration data (for example, the cluster definitions of hosts, I/O groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can carry this out, you must reinstate the cluster as it was configured at the time of the failure. This means you restore the same MDGs, I/O groups, host definitions, and VDisks that existed prior to the failure. Then you can copy the application data back onto these VDisks and resume operations.
4. Recover the hardware. This includes hosts, SVCs, disk controller systems, disks, and the SAN fabric. The hardware and SAN fabric must physically be the same as those used before the failure.


5. Re-initialize the cluster with the configuration node only; the other nodes are recovered when the configuration is restored.
6. Restore your cluster configuration using the backup configuration file generated prior to the failure.
7. Restore the data on your virtual disks (VDisks) using your preferred restore solution or with help from IBM Service.
8. Resume normal operations.

7.14.10 Restoring the SVC cluster configuration


It is very important that you always consult IBM Support before you restore the SVC cluster configuration from backup; this helps you analyze the root cause of why the cluster configuration was lost. After the svcconfig restore -execute command is started, any prior user data on the VDisks must be considered destroyed and has to be recovered through your usual application data backup process.
See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544, for more information about this topic. For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543.
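As a rough outline only, and strictly under IBM Support guidance: with the previously saved svc.config.backup.xml file placed back in /tmp on the configuration node, the restore is driven by the svcconfig restore command. The two-phase -prepare and -execute flow sketched here is our assumption of the usual sequence and must be verified against the Command-Line Interface User's Guide:

IBM_2145:ITSO-CLS1:admin>svcconfig restore -prepare
IBM_2145:ITSO-CLS1:admin>svcconfig restore -execute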

7.14.11 Deleting configuration backup


This section details the tasks that you can perform to delete the configuration backup files that are stored in the configuration file directory on the cluster. Never clear these files without having a backup of your configuration stored in a secure place. The clear command erases the files in the /tmp directory; it does not clear the running configuration or leave the cluster in a non-working state, but it removes all configuration backup files stored in /tmp, as shown in Example 7-224.
Example 7-224 svcconfig clear command.

IBM_2145:ITSO-CLS1:admin>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully

7.15 SAN troubleshooting and data collection


When we encounter a SAN problem, the SAN Volume Controller is often very helpful in troubleshooting the SAN, because it sits at the heart of the environment that most of the communication travels through. Chapter 14 of SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a detailed description of how to troubleshoot and collect data from the SVC:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
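As a minimal first data-collection pass from the CLI before engaging support, you can capture a snap and review the Fibre Channel logins that the SVC sees. The svcinfo lsfabric command is our suggestion here, and its availability and output should be checked against your code level:

IBM_2145:ITSO-CLS1:admin>svcinfo lsfabric
IBM_2145:ITSO-CLS1:admin>svc_snap

The resulting snap file in /dumps can then be copied to your management workstation with pscp, as shown earlier in Example 7-186.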


7.16 T3 recovery process


A procedure called T3 recovery has been tested and used in select cases where the cluster has been completely destroyed. (An example would be simultaneously pulling power cords from all nodes to their UPSs; in this case, all nodes would boot up to node error 578 when power was restored.) This procedure, in certain circumstances, is able to recover most user data. However, this procedure is not to be used by the customer or IBM CE without direct involvement from IBM level 3 support. It is not published; we refer to it here only to indicate that loss of a cluster can be recoverable without total data loss, although it requires a restore of application data from backup. It is a very sensitive procedure, only to be used as a last resort, and it cannot recover any data that had not been destaged from cache at the time of the total cluster failure.


Chapter 8. SVC operations using the GUI


In this chapter, we show operational management using the SVC GUI. We have divided the chapter into what we describe as normal operations and advanced operations. We describe the basic configuration procedures required to get your SVC environment up and running as quickly as possible using the Master Console and its associated GUI. Chapter 2, IBM System Storage SAN Volume Controller overview on page 7, describes the features in greater depth; in this chapter, we focus on the operational aspects.


8.1 SVC normal operations using the GUI


In this topic, we discuss some of the operations that we have defined as normal, day-to-day activities. Many users can be logged into the GUI at any given time, but be aware that there is no locking mechanism: if two users change the same object, the last action entered from the GUI is the one that takes effect.
Important: Data entries made through the GUI are case sensitive.

8.1.1 Organizing on-screen content


In the following sections, there are several windows within the SVC GUI where you can perform filtering (to minimize the amount of data shown in the window) and sorting (to organize the content of the window). This section provides a brief overview of these functions. The SVC Welcome screen (Figure 8-1) is referred to as the Welcome screen throughout this chapter; we expect you to be able to return to this screen without us showing it each time.

Figure 8-1 The Welcome screen

From the Welcome screen select the Work with Virtual Disks option, and select the Virtual Disks link.

Table filtering
When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the visible list, which is useful if the list of entries is too large to work with. You can change the filtering here as many times as you like, to further reduce the lists or for different views. Use the Filter Row Icon, as shown in Figure 8-2, or use the Show Filter Row option in the drop-down menu and click Go.


Figure 8-2 Filter Row icon

This enables you to filter based on the column names, as shown in Figure 8-3. The Filter under each column name shows that no filter is in effect for that column.

Figure 8-3 Show filter row

If you want to filter on a column, click the word Filter, which opens up a filter dialog, as shown in Figure 8-4 on page 472.


Figure 8-4 Filter option on Name

A list with VDisks is displayed that only contains 01 somewhere in the name, as shown in Figure 8-5. (Notice the filter line under each column heading showing that our filter is in place.) If you want, you can perform some additional filtering on the other columns to further narrow your view.

Figure 8-5 Filtered on Name containing 01 in the name.

The option to reset the filters is shown in Figure 8-6. Use the Clear All Filters icon or use the Clear All Filters option in the drop-down menu and click Go.


Figure 8-6 Clear All Filter options

Sorting
Regardless of whether you use the pre-filter or additional filter options, when you are in the Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the list and clicking Go, or you can click the small icon highlighted by the mouse pointer in Figure 8-7.

Figure 8-7 Selecting Edit Sort

As shown in Figure 8-8, you can sort based on up to three criteria, including Name, State, I/O Group, MDisk Group, Capacity (MB), Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name, UID, and Copies. Note: The actual sort criteria differs based on the information that you are sorting.


Figure 8-8 Sorting criteria

When you finish making your choices, click OK to regenerate the display based on your sorting criteria. Look at the icons next to each column name to see the sort criteria currently in use, as shown in Figure 8-9. If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click the icon highlighted by the mouse pointer in Figure 8-9.

Figure 8-9 Selecting to clear all sorts


8.1.2 Documentation
If you need to access the online documentation, click the i (information) icon in the upper right corner of the window. This opens the Help Assistant pane on the left side of the window, as shown in Figure 8-10.

Figure 8-10 Online help using the i icon

8.1.3 Help
If you need to access the online help, click the ? (question mark) icon in the upper right corner of the window. This opens a new window called Information Center. Here you can search for any item you want help with (see Figure 8-11 on page 476).


Figure 8-11 Online help using the ? icon

8.1.4 General housekeeping


If at any time the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the small arrow icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.
In addition, each time you open a configuration or administration window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the main banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.

8.1.5 Viewing progress


With this view, you can see the status of activities such as VDisk Migration, MDisk Removal (Figure 8-12 on page 477), Image Mode Migration, Extend Migration, FlashCopy, Metro Mirror and Global Mirror, VDisk Formatting, Space-Efficient copy repair, VDisk copy verification, and VDisk copy synchronization. You can see detailed information about an item by clicking the underlined (progress) number in the Progress column.


Figure 8-12 Showing possible processes to view where MDisk is being removed from MDG

8.2 Working with managed disks


This section describes the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment, starting with the tasks that you can perform at the disk controller level.

8.2.1 Viewing disk controller details


Perform the following steps to view information about a back-end disk controller in use by the SVC environment: 1. Select the Work with Managed Disks option and then the Disk Controller Systems link. 2. The Viewing Disk Controller Systems window (Figure 8-13) opens. For more detailed information about a specific controller, click its ID (highlighted by the mouse cursor in Figure 8-13).

Figure 8-13 Disk controller systems

3. When you click the controller Name (Figure 8-13), the Viewing General Details window (Figure 8-14) opens for the controller (where Name is the Controller you selected). Review the details and click Close to return to the previous window.


Figure 8-14 Viewing general details about a disk controller

8.2.2 Renaming a disk controller


Perform the following steps to rename a disk controller used by the SVC cluster: 1. Select the radio button to the left of the controller you want to rename. Then select Rename a Disk Controller System from the list and click Go. 2. In the Renaming Disk Controller System controllername window (where controllername is the controller you selected in the previous step), type the new name you want to assign to the controller and click OK. See Figure 8-15.

Figure 8-15 Renaming a controller

3. You return to the Disk Controller Systems window. You should now see the new name of your controller displayed. Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word controller, because this prefix is reserved for SVC assignment only.
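Tip: For reference, the same rename can also be performed from the CLI; the controller ID 0 and the new name ITSO_DS4500 below are only placeholders:

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO_DS4500 0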


8.2.3 Discovery status


You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery Status window, which tells you whether there is an ongoing MDisk discovery. A running MDisk discovery is displayed with a status of Active. Perform the following steps to view the status of an MDisk discovery:
1. Select Work with Managed Disks → Discovery Status. The Viewing Discovery Status window is displayed, as shown in Figure 8-16.

Figure 8-16 Discovery status view

2. Click Close to close this window.

8.2.4 Managed disks


This section details the tasks that can be performed at an MDisk level. You perform each of the following tasks from the Managed Disks window (Figure 8-17). To access this window, from the SVC Welcome screen, click the Work with Managed Disks option and then the Managed Disks link.

Figure 8-17 Viewing Managed Disks window

8.2.5 MDisk information


To retrieve information about a specific MDisk, perform the following steps: 1. In the Viewing Managed Disks window (Figure 8-18), click the underlined name of any MDisk in the list to reveal more detailed information about the specified MDisk.


Figure 8-18 Managed disk details

Tip: If at any time the content in the right side of frame is abbreviated, you can minimize the My Work column by clicking the arrow to the right of the My Work heading at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on page 479). After you minimize the column, you see an arrow in the far left position in the same location where the My Work column formerly appeared. 2. Review the details and then click Close to return to the previous window.

8.2.6 Renaming an MDisk


Perform the following steps to rename an MDisk controlled by the SVC cluster: 1. Select the radio button to the left of the MDisk that you want to rename in the window shown in Figure 8-17 on page 479. Select Rename an MDisk from the list and click Go. 2. On the Renaming Managed Disk MDiskname window (where MDiskname is the MDisk you selected in the previous step), type the new name you want to assign to the MDisk and click OK. See Figure 8-19 on page 481.


Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word MDisk, because this prefix is reserved for SVC assignment only.

Figure 8-19 Renaming an MDisk
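Tip: For reference, an MDisk can also be renamed from the CLI; the MDisk ID 3 and the new name mdisk_itso_01 below are only placeholders:

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_itso_01 3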

8.2.7 Discovering MDisks


Perform the following steps to discover newly assigned MDisks: 1. Select Discover MDisks from the drop-down list shown in Figure 8-17 on page 479 and click Go. 2. Any newly assigned MDisks are displayed in the window shown in Figure 8-20.

Figure 8-20 Newly discovered managed disks
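Tip: For reference, the same discovery can be triggered from the CLI, and the newly discovered MDisks can then be listed:

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk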

8.2.8 Including an MDisk


If a significant number of errors occurs on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a storage area network (SAN) zoning problem, or poorly planned maintenance. If it is a hardware fault, you should have received SNMP alerts about the state of the hardware before the disk was excluded and undertaken preventative maintenance. If not, the hosts using VDisks that relied on the excluded MDisk now have I/O errors.


After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair SAN zones), you can tell the SVC to include the MDisk again.
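Tip: For reference, after the repair the MDisk can also be included again from the CLI; the MDisk ID 3 below is only a placeholder:

IBM_2145:ITSO-CLS1:admin>svctask includemdisk 3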

8.2.9 Showing a VDisk using a certain MDisk


To display information about VDisks that reside on an MDisk, perform the following steps: 1. Select the radio button, as shown in Figure 8-21 on page 482, to the left of the MDisk you want to obtain VDisk information about. Select Show VDisks using this MDisk from the list and click Go.

Figure 8-21 Show VDisk using an MDisk

2. You now see a subset (specific to the MDisk you chose in the previous step) of the View Virtual Disks window in Figure 8-22. We cover the View Virtual Disks window in more detail in 8.4, Working with hosts on page 493.

Figure 8-22 VDisk list from a selected MDisk
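Tip: For reference, a similar list can be obtained from the CLI. To our understanding, svcinfo lsmdiskmember returns the IDs of the VDisks that use extents on the specified MDisk; the MDisk ID 3 below is only a placeholder:

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember 3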


8.3 Working with managed disk groups (MDG)


This section details the tasks that can be performed with managed disk groups. From the Welcome screen shown in Figure 8-1 on page 470, select the Work with Managed Disks option.

8.3.1 Viewing MDisk group information


Each of the following tasks is performed from the View Managed Disk Groups window (Figure 8-23). To access this window, from the SVC Welcome screen, click the Work with Managed Disks option and then the Managed Disk Groups link.

Figure 8-23 Viewing MDGs

To retrieve information about a specific MDG, perform the following steps: 1. In the Viewing Managed Disk Groups window (Figure 8-23), click the underlined name of any MDG in the list. 2. In the View MDisk Group Details window (Figure 8-24), you see more detailed information about the specified MDG. Here you see information pertaining to the number of MDisks and VDisks as well as the capacity (both total and free space) within the MDG. When you finish viewing the details, click Close to return to the previous window.

Figure 8-24 MDG details

8.3.2 Creating MDisk groups


Perform the following steps to create a managed disk group (MDG):


1. From the SVC Welcome screen (Figure 8-1 on page 470), select the Work with Managed Disks option and then the Managed Disks Groups link. 2. The Viewing Managed Disks Groups window opens (see Figure 8-25 on page 484). Select Create an MDisk Group from the list and click Go.

Figure 8-25 Selecting the option to create an MDisk group

3. In the Create Managed Disk Group window, the wizard gives you an overview of what will be done. Click Next.
4. In the Name the group and select the managed disks window (Figure 8-26 on page 485), follow these steps:
a. Type a name for the MDG.
Note: If you do not provide a name, the SVC automatically generates the name MDiskgrpX, where X is the ID sequence number assigned by the SVC internally. If you want to provide a name (as we have), you can use the letters A to Z, a to z, numbers 0 to 9, and the underscore. It can be between one and 15 characters in length and is case sensitive, but it cannot start with a number or the word MDiskgrp, because this prefix is reserved for SVC assignment only.
b. From the MDisk Candidates box shown in Figure 8-26, select the MDisks to put into the MDG, one at a time. Click Add to move them to the Selected MDisks box. There might be more than one page of disks; you can navigate between the pages (the MDisks you selected are preserved).
c. You can specify a threshold that sends a warning to the error log when the capacity is first exceeded. It can be either a percentage or a specific amount.
d. Click Next.


Figure 8-26 Name the group and select the managed disks window

5. From the list shown in Figure 8-27, select the extent size to use (512 MB is a typical value). When you select an extent size, the corresponding total cluster capacity in TB is displayed. Click Next.

Figure 8-27 Select Extent Size window

6. In the window Verify Managed Disk Group (Figure 8-28), verify that the information specified is correct. Click Finish.


Figure 8-28 Verify MDG wizard

7. You return to the Viewing Managed Disk Groups window (Figure 8-29), where the new MDG is displayed.

Figure 8-29 A new MDG added successfully

You have now completed the tasks required to create an MDG.
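The same task can also be scripted from the SVC command line. A minimal sketch, assuming example names MDG_DS45 for the new group and mdisk0 through mdisk2 for its members (these names are illustrations only, not taken from the figures):

svctask mkmdiskgrp -name MDG_DS45 -ext 512 -warning 80% -mdisk mdisk0:mdisk1:mdisk2
svcinfo lsmdiskgrp MDG_DS45

Here -ext is the extent size in MB and -warning sets the capacity warning threshold described above.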

8.3.3 Renaming an MDisk group


To rename a managed disk group, perform the following steps:
1. Select the radio button in the Viewing Managed Disk Groups window (Figure 8-30) to the left of the MDG you want to rename. Select Modify an MDisk Group from the list and click Go.

Figure 8-30 Renaming an MDG


2. From the Renaming Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), type the new name you want to assign and click OK (see Figure 8-31). You can also set or change the usage threshold from this window.
Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, a dash, and the underscore. It can be between one and 15 characters in length, but cannot start with a number, a dash, or the word mdiskgrp, because this prefix is reserved for SVC assignment only.

Figure 8-31 Renaming an MDG

It is considered a best practice to enable the capacity warning for your MDGs. The warning threshold should be addressed in the planning phase of the SVC installation, although it can always be changed later without interruption.
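For reference, the rename and the warning threshold can also be changed in one step from the CLI; a minimal sketch with example names only:

svctask chmdiskgrp -name MDG_DS45_NEW -warning 85% MDG_DS45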

8.3.4 Deleting an MDisk group


To delete a managed disk group, perform the following steps: 1. Select the radio button to the left of the MDG you want to delete. Select Delete an MDisk Group from the list and click Go. 2. In the Deleting a Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), click OK to confirm that you want to delete the MDG (see Figure 8-32).

Figure 8-32 Deleting an MDG

3. If there are MDisks and VDisks within the MDG you are deleting, you are required to click Forced delete for the MDG (Figure 8-33 on page 488).


Important: If you delete an MDG with the Forced Delete option, and VDisks were associated with that MDisk group, you will lose the data on those VDisks, because they are deleted before the MDisk group. If you want to keep your data, migrate or mirror the VDisks to another MDisk group before you delete their original MDisk group.

Figure 8-33 Confirming forced deletion of an MDG
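The CLI equivalent is a sketch along these lines, again using an example MDG name; the -force flag corresponds to the Forced delete button and destroys any VDisks left in the group:

svctask rmmdiskgrp MDG_DS45           (fails if the group is not empty)
svctask rmmdiskgrp -force MDG_DS45    (forced deletion, equivalent to the Forced delete button)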

8.3.5 Adding MDisks


If you created an empty MDG, or if you assign additional MDisks to your SVC environment later, you can add MDisks to existing MDGs by performing the following steps:
Note: You can only add unmanaged MDisks to an MDG.
1. Select the radio button (Figure 8-34) to the left of the MDG to which you want to add MDisks. Select Add MDisks from the list and click Go.

Figure 8-34 Adding an MDisk to an existing MDG

2. From the Adding Managed Disks to Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), select the desired MDisk or MDisks from the MDisk Candidates list (Figure 8-35). After you select all the desired MDisks, click OK.


Figure 8-35 Adding MDisks to an MDG
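From the CLI, the equivalent is a one-line sketch (MDisk and MDG names are examples only):

svctask addmdisk -mdisk mdisk5:mdisk6 MDG_DS45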

8.3.6 Removing MDisks


To remove an MDisk from an MDG, perform the following steps: 1. Select the radio button to the left (Figure 8-36) of the MDG from which you want to remove an MDisk. Select Remove MDisks from the list and click Go.

Figure 8-36 Viewing MDGs

2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where MDGname is the MDG you selected in the previous step), select the desired MDisk or MDisks from the list (Figure 8-37). After you select all the desired MDisks, click OK.


Figure 8-37 Removing MDisks from an MDG

3. If VDisks are using the MDisks that you are removing from the MDG, you are required to click the Forced Delete button to confirm the removal of the MDisk, as shown in Figure 8-38.
4. An error message is displayed if there is not sufficient space to migrate the VDisk data to other extents on other MDisks in that MDG.

Figure 8-38 Confirming forced deletion of MDisk from MDG
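A CLI sketch of the same removal, with example names; -force triggers the migration of VDisk extents off the MDisk, matching the Forced Delete confirmation above:

svctask rmmdisk -mdisk mdisk5 -force MDG_DS45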

8.3.7 Displaying MDisks


If you want to view which MDisks are configured on your system, perform the following steps to display MDisks:
1. From the SVC Welcome screen (Figure 8-1 on page 470), select the Work with Managed Disks option and then the Managed Disks link.
2. In the Viewing Managed Disks window (Figure 8-39), if your MDisks are not displayed, rescan the Fibre Channel network. Select Discover MDisks from the list and click Go.


Figure 8-39 Discover MDisks

Note: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS4000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).
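The same rescan can be triggered from the CLI; a minimal sketch:

svctask detectmdisk
svcinfo lsmdisk

The detectmdisk command rescans the Fibre Channel network for new MDisks, and lsmdisk lists what the cluster currently sees.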

8.3.8 Showing MDisks in this group


To show a list of MDisks within an MDG, perform the following steps: 1. Select the radio button to the left (Figure 8-40) of the MDG from which you want to retrieve MDisk information. Select Show MDisks in this group from the list and click Go.

Figure 8-40 View MDGs

2. You now see a subset (specific to the MDG you chose in the previous step) of the Viewing Managed Disk window (Figure 8-41) shown in 8.2.4, Managed disks on page 479.


Figure 8-41 Viewing MDisks in an MDG

Note: Remember, you can collapse the column entitled My Work at any time by clicking the arrow to the right of the My Work column heading.

8.3.9 Showing the VDisks associated with an MDisk group


To show a list of VDisks associated with MDisks within an MDG, perform the following steps: 1. Select the radio button to the left (Figure 8-42) of the MDG from which you want to retrieve VDisk information. Select Show VDisks using this group from the list and click Go.

Figure 8-42 View MDisks

2. You see a subset (specific to the MDG you chose in the previous step) of the Viewing Virtual Disks window in Figure 8-43. We cover the Viewing Virtual Disks window in more detail in VDisk information on page 504.


Figure 8-43 VDisks belonging to selected MDG

You have now completed the tasks required to manage the disk controller systems, managed disks, and MDGs within the SVC environment.

8.4 Working with hosts


In this section, we describe the various configuration and administration tasks that you can perform on the hosts connected to your SVC. For more details about connecting hosts to an SVC in a SAN environment, see IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05.
Starting with SVC 5.1, iSCSI is introduced as an additional method for connecting hosts to the SVC. A host can therefore use either Fibre Channel or iSCSI as the connection method. After the connection type has been selected, all further work with the host is identical for Fibre Channel attached and iSCSI attached hosts.
To access the Viewing Hosts window, at the SVC Welcome screen (Figure 8-1 on page 470), click the Work with Hosts option and then the Hosts link. The Viewing Hosts window appears, as shown in Figure 8-44 on page 493. Each of the tasks shown in the following sections is performed from the Viewing Hosts window.

Figure 8-44 Viewing hosts


8.4.1 Host information


To retrieve information about a specific host, perform the following steps: 1. In the Viewing Hosts window (see Figure 8-44), click the underlined name of any host in the list displayed. 2. Next, you can obtain details for the host you requested: a. In the Viewing General Details window (Figure 8-45), you can see more detailed information about the specified host.

Figure 8-45 Host details

b. You can click the Port Details (Figure 8-46) link to see attachment information such as the WWPNs or IQN defined for this host.

Figure 8-46 Host port details

c. You can click Mapped I/O Groups (Figure 8-47) to see which I/O groups this host can access.


Figure 8-47 Host mapped I/O groups

d. As stated before, a new feature in SVC 5.1 is that hosts can be created with either Fibre Channel or iSCSI connections. If you view the iSCSI details for the host in this example, no iSCSI parameters are shown (Figure 8-48), because this host already has an FC port configured, as shown in Figure 8-46 on page 494.

Figure 8-48 iSCSI parameters

When you are finished viewing the details, click Close to return to the previous window.

8.4.2 Creating a host


Because there are two connection methods to choose from for a host, Fibre Channel and iSCSI, we show both methods.

8.4.3 Fibre Channel attached hosts


To create a new host that uses the Fibre Channel connection type, perform the following steps: 1. As shown in Figure 8-49, select the option Create a Host from the list and click Go.


Figure 8-49 Create a host

2. In the Creating Hosts window (Figure 8-50 on page 497), type a name for your host (Host Name).
Note: If you do not provide a name, the SVC automatically generates the name hostX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number or the word host, because this prefix is reserved for SVC assignment only. Although using an underscore might work in some circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and can cause problems, so we recommend that you do not use the underscore in host names.
3. Select the mode (Type) for the host. The default is Generic and should be used for all hosts, except that you need to select HP_UX if you are using HP-UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.
4. Connection type is either Fibre Channel or iSCSI. If you select Fibre Channel, you are asked for the port mask and the WWPNs of the server you are creating. If you select iSCSI, you are asked for the iSCSI initiator name (commonly called the IQN) and the CHAP authentication secret used to authenticate the host for volume access.
5. You can use a port mask to control the node target ports that a host can access. The port mask applies to logins from the host initiator ports that are associated with the host object.
Note: For each login between a host HBA port and a node port, the node examines the port mask that is associated with the host object of which the host HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). The right-most bit in the mask corresponds to the lowest numbered SVC port (1, not 4) on a node. As shown in Figure 8-50 on page 497, our port mask is 1111; this means that the host HBA port can access all node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are enabled for this host access.


6. Select and add the worldwide port names (WWPNs) that correspond to your HBA or HBAs. Click OK. In some cases, your WWPNs might not be displayed, although you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs into the Additional Ports field (type in WWPNs, one per line) at the bottom of the window and mark Do not validate WWPN before you click OK.

Figure 8-50 Creating a new Fibre Channel connected host

This brings you back to the Viewing Hosts window (Figure 8-51), where you can see the newly added host.

Figure 8-51 Create host results
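For reference, a Fibre Channel host definition can also be created from the CLI. A minimal sketch, assuming an example host name of Thor and an example WWPN (replace both with your own values):

svctask mkhost -name Thor -type generic -mask 1111 -hbawwpn 210100E08B251DD4
svcinfo lshost Thor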

8.4.4 iSCSI attached hosts


We will now show how to configure a host that is connected using iSCSI.


Before you start using iSCSI, you must configure the cluster to use the iSCSI option. To create an iSCSI attached host, from the Welcome window select Work with Hosts and then select Hosts. From the drop-down list, select Create a Host, as shown in Figure 8-52 on page 498.

Figure 8-52 Creating iSCSI host

In the Creating Hosts window (Figure 8-53), type a name for your host (Host Name), and then follow these steps:
1. Select the mode (Type) for the host. The default is Generic and should be used for all hosts, except that you need to select HP_UX if you are using HP-UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.
2. Connection type is iSCSI.
3. The iSCSI initiator name, or IQN, is iqn.1991-05.com.microsoft:freyja. This is obtained from the server and serves, in general, the same purpose as a WWPN.
4. The CHAP secret is the authentication method used to prevent other iSCSI hosts from using the same connection. The CHAP secret can be set for the whole cluster under the cluster properties or for each host definition, and it must be identical on the server and in the cluster or host definition. It is possible to create an iSCSI host definition without using a CHAP secret.
In Figure 8-53, we set the parameters for our host called Freyja.


Figure 8-53 iSCSI parameters.

The iSCSI host is now created.
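An equivalent CLI sketch for an iSCSI host, using the IQN from this example; the chhost command shown for the optional CHAP secret is our assumption of the relevant parameter, and the secret value is purely illustrative:

svctask mkhost -name Freyja -type generic -iscsiname iqn.1991-05.com.microsoft:freyja
svctask chhost -chapsecret passw0rd Freyja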

8.4.5 Modifying a host


To modify a host, perform the following steps:
1. Select the radio button to the left of the host you want to modify (Figure 8-54). Select Modify host from the list and click Go.

Figure 8-54 Modifying a host

2. From the Modifying Host window (Figure 8-55), type the new name you want to assign or change the Type parameter and click OK.


Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number or the word host, because this prefix is reserved for SVC assignment only. While using an underscore might work in some circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can cause problems, so we recommend that you do not use the underscore in host names.

Figure 8-55 Modifying a host (choosing a new name)

8.4.6 Deleting a host


To delete a host, perform the following steps:
1. Select the radio button to the left of the host you want to delete (Figure 8-56 on page 500). Select Delete a Host from the list and click Go.

Figure 8-56 Deleting a host

2. In the Deleting Host hostname window (where hostname is the host you selected in the previous step), click OK if you are sure you want to delete the host. See Figure 8-57.


Figure 8-57 Deleting a host

3. If you still have VDisks associated with the host, you will see a window (Figure 8-58) requesting confirmation for the forced deletion of the host. Click OK and all the mappings between this host and its VDisks are deleted before the host is deleted.

Figure 8-58 Forcing a deletion
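The CLI equivalent is a short sketch (the host name is an example); -force also removes any remaining VDisk-to-host mappings, matching the forced deletion window above:

svctask rmhost -force Thor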

8.4.7 Adding ports


If you add an HBA or a NIC to a server that is already defined within the SVC, you can add the additional ports to your host definition by performing the following steps:
Note: A host definition can only have Fibre Channel ports or iSCSI ports defined, not both.
1. Select the radio button to the left of the host to which you want to add ports, as shown in Figure 8-59. Select Add Ports from the list and click Go.

Figure 8-59 Adding ports to a host

2. In the Adding ports window, you can select whether you are adding a Fibre Channel port (WWPN) or an iSCSI port (IQN). Either select the desired WWPN from the Available Ports list and click Add, or enter the new IQN in the iSCSI section. After adding the WWPN or IQN, click OK. See Figure 8-60 on page 502. If your WWPNs are not in the list of Available Ports and you are sure your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up, then you can manually type the WWPN of your HBAs into the Add Additional Ports field at the bottom of the window before you click OK.


Figure 8-60 Adding WWPN ports to a host

Figure 8-61 on page 502 shows an IQN being added to our host called Thor.

Figure 8-61 Adding IQN port to a host
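Adding a port can also be done from the CLI; a sketch with example values, one line for a Fibre Channel port and one for an iSCSI port (remember that a given host definition holds only one connection type):

svctask addhostport -hbawwpn 210100E08B251DD5 Thor
svctask addhostport -iscsiname iqn.1991-05.com.microsoft:freyja Freyja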

8.4.8 Deleting ports


To delete a port from a host, perform the following steps: 1. Select the radio button to the left of the host from which you want to delete a port (Figure 8-62). Select Delete Ports from the list and click Go.

Figure 8-62 Delete ports from a host


2. In the Deleting Ports From hostname window (where hostname is the host you selected in the previous step), start by selecting the connection type of the port you want to delete. If you select Fibre Channel port, select the port you want to delete from the Available Ports list and click Add. When you have moved all the ports you want to delete from your host to the column on the right, click OK. If you selected iSCSI as the connection type, select the ports from the available iSCSI initiators and click Add. Figure 8-63 on page 503 shows how we select a WWPN port to delete, and in Figure 8-64 we have selected an iSCSI initiator to delete.

Figure 8-63 Deleting WWPN port from a host

Figure 8-64 Deleting iSCSI initiator from an host

3. If you have VDisks that are associated with the host, you will receive a warning about deleting a host port. You need to confirm your action when prompted, as shown in Figure 8-65. A similar warning message appears if you are deleting an iSCSI port.


Figure 8-65 Port deletion confirmation

8.5 Working with virtual disks


In this section, we describe the tasks that you can perform at a VDisk level.

8.5.1 Using the Virtual Disks window for VDisks


Each of the following tasks is performed from the Viewing Virtual Disks window (Figure 8-66 on page 504). To access this window, from the SVC Welcome screen, click the Work with Virtual Disks option and then the Virtual Disks link. The drop-down menu contains all the actions you can perform in the Virtual Disk window.

Figure 8-66 Viewing Virtual Disks

8.5.2 VDisk information


To retrieve information about a specific VDisk, perform the following steps:


1. In the Viewing Virtual Disks window, click the underlined name of the desired VDisk in the list. 2. The next window (Figure 8-67) that opens shows detailed information. Review the information. When you are done, click Close to return to the Viewing Virtual Disks window.

Figure 8-67 VDisk details

8.5.3 Creating a VDisk


To create a new VDisk, perform the following steps:
1. Select Create a VDisk from the list (Figure 8-66 on page 504) and click Go.
2. The Create Virtual Disks wizard launches. Click Next.
3. The Select groups window opens. Choose an I/O group and then a preferred node (see Figure 8-68). In our case, we let the system choose. Click Next.

Figure 8-68 Creating a VDisk: Select Groups

4. The Set attributes window opens (Figure 8-69).



a. Choose what type of VDisk you want to create, striped or sequential.
b. Select the cache mode, Read/Write or None.
c. If you want, enter a unit device identifier.
d. Enter the number of VDisks you want to create.
e. You can select the Space-efficient or Mirrored Disk check box, which will expand the respective section with extra options.
f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its managed disk extents).
g. Click Next.

Figure 8-69 Creating a VDisk: Set Attributes.

5. Select the MDG of which you want the VDisk to be a member.
a. If you selected Striped, you will see the window shown in Figure 8-70. You must select the MDisk group, and then the Managed Disk Candidates window will appear. You can optionally add particular MDisks to be striped across.

Figure 8-70 Selecting an MDG


b. If you selected Sequential mode, you will see the window shown in Figure 8-71. You must select the MDisk group, and then Managed Disks will appear. You need to choose at least one MDisk as a managed disk.

Figure 8-71 Creating a VDisk wizard: Select attributes for sequential mode VDisks

c. Enter the size of the VDisk you want to create and select the capacity measurement (MB or GB) from the list.
Note: An entry of 1 GB uses 1024 MB.
d. Click Next.
6. You can enter the VDisk name if you want to create just one VDisk, or the naming prefix if you want to create multiple VDisks. Click Next.
Tip: When you create more than one VDisk, the wizard will not ask you for a name for each VDisk to be created. Instead, the name you use here will be a prefix and have a number, starting at zero, appended to it as each one is created.

Figure 8-72 Creating a VDisk wizard: Name the VDisk(s)


Note: If you do not provide a name, the SVC automatically generates the name VDiskX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length, but cannot start with a number or the word VDisk, because this prefix is reserved for SVC assignment only. 7. In the Verify VDisk window (see Figure 8-73 for striped and Figure 8-74 on page 509 for sequential), check if you are satisfied with the information shown, then click Finish to complete the task. Otherwise, click Back to return and make any corrections.

Figure 8-73 Creating a VDisk wizard: Verify VDisk Striped type


Figure 8-74 Creating a VDisk wizard: Verify VDisk sequential type

8. Figure 8-75 on page 509 shows the progress of the creation of your VDisks on storage and the final results.

Figure 8-75 Creating a VDisk wizard: final result
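For reference, a striped VDisk like the one created by the wizard can also be created with a single CLI command; a minimal sketch with example names and sizes only:

svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 0 -vtype striped -size 10 -unit gb -name vdisk7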

8.5.4 Creating a space-efficient VDisk with auto-expand


Space-efficient VDisks allow you to commit only a minimal amount of real storage while presenting a virtual capacity that may be larger than the available free storage.


In this section, we are going to create a space-efficient VDisk (SEV disk) step-by-step. This allows you to create VDisks with a much higher virtual capacity than is physically available (this is called thin provisioning). As the host using the VDisk fills the real allocation, the SVC can dynamically grow the real capacity (when you enable auto expand) until the virtual capacity limit is reached or the Managed Disk Group physically runs out of free space. The latter scenario causes the growing VDisk to go offline, affecting the host using that VDisk. Therefore, enabling threshold warnings is important and recommended.
Perform the following steps to create a space-efficient VDisk with auto expand:
1. Select Create a VDisk from the list (Figure 8-66 on page 504) and click Go.
2. The Create Virtual Disks wizard launches. Click Next.
3. The Select groups window opens. Choose an I/O group and then a preferred node (see Figure 8-76 on page 510). In our case, we let the system choose. Click Next.

Figure 8-76 Creating a VDisk wizard: Select Groups

4. The Set attributes window opens (Figure 8-69 on page 506).
a. Choose what type of VDisk you want to create, striped or sequential.
b. Select the cache mode, Read/Write or None.
c. If you want, enter a unit device identifier.
d. Enter the number of VDisks you want to create.
e. Select the Space-efficient check box, which will expand this section with the following options:
i. Type the size of the VDisk Capacity (remember, this is the virtual size).
ii. Type in a percentage or select a specific size for the usage threshold warning.
iii. Select the Auto expand check box. This will allow the real disk size to grow as required.
iv. Select the Grain size (choose 32 KB normally, but match the FlashCopy grain size, which is 256 KB, if the VDisk is being used for FlashCopy).
f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its managed disk extents).
g. Click Next.


Figure 8-77 Creating a VDisk wizard: Set Attributes

5. In the Select MDisk(s) and Size for a <modetype>-Mode VDisk window, shown in Figure 8-78 on page 511, follow these steps:
a. Select the Managed Disk Group from the list.
b. Optionally, choose the Managed Disk Candidates upon which to create the VDisk. Click Add to move them to the Managed Disks Striped in this Order box.
c. Type in the Real size you wish to allocate. This is how much disk space will actually be allocated. It can be either a percentage of the virtual size or a specific amount.

Figure 8-78 Creating a VDisk wizard: Selecting MDisk(s) and sizes

6. In the window Name the VDisk(s) (Figure 8-79 on page 512), type a name for the VDisk you are creating. In our case, we used vdisk_sev2. Click Next.


Figure 8-79 Name the VDisk(s) window

7. In the Verify Attributes window (Figure 8-80), verify the selections. We can select the Back button at any time to make changes.

Figure 8-80 Verifying space-efficient VDisk Attributes window

8. After selecting the Finish option, we are presented with a window (Figure 8-81 on page 513) that tells us the result of the action.


Figure 8-81 Space-efficient VDisk creation success
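A space-efficient VDisk with auto-expand can likewise be created in one CLI command; a sketch using the VDisk name from this example and otherwise illustrative values (100 GB virtual capacity, 2% real capacity, 32 KB grain, warning at 80%):

svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 0 -vtype striped -size 100 -unit gb -rsize 2% -autoexpand -grainsize 32 -warning 80% -name vdisk_sev2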

8.5.5 Deleting a VDisk


To delete a VDisk, perform the following steps: 1. Select the radio button to the left of the VDisk you want to delete (Figure 8-66 on page 504). Select Delete a VDisk from the list and click Go. 2. In the Deleting Virtual Disk VDiskname window (where VDiskname is the VDisk you just selected), click OK to confirm your desire to delete the VDisk. See Figure 8-82.

Figure 8-82 Deleting a VDisk

If the VDisk is currently assigned to a host, you receive a secondary message where you must click Forced Delete to confirm your decision. See Figure 8-83. This deletes the VDisk-to-host mapping before deleting the VDisk. Important: Deleting a VDisk is a destructive action for user data residing in that VDisk.


Figure 8-83 Deleting a VDisk: Forcing a deletion

8.5.6 Deleting a VDisk-to-host mapping


To unmap (unassign) a VDisk from a host, perform the following steps: 1. Select the radio button to the left of the VDisk you want to unmap. Select Delete a VDisk-to-host mapping from the list and click Go. 2. In the Deleting a VDisk-to-host mapping window (Figure 8-84), from the Host Name list, select the host from which to unassign the VDisk. Click OK. Tip: Make sure that the host is no longer using that disk. Unmapping a disk from a host will not destroy its contents. Unmapping a disk has the same effect as powering off the computer without first performing a clean shutdown, and thus might leave the data in an inconsistent state. Also, any running application that was using the disk will start to receive I/O errors.

Figure 8-84 Deleting a VDisk-to-host mapping

8.5.7 Expanding a VDisk


Expanding a VDisk presents a larger capacity disk to your operating system. Although you can do this easily using the SVC, you must ensure that your operating system is prepared for it and supports the volume expansion before you use this function. Dynamic expansion of a VDisk is only supported when the VDisk is in use by:
- AIX 5L V5.2 and above
- W2K and W2K3 for basic disks
- W2K and W2K3 with a hot fix from Microsoft (Q327020) for dynamic disks
Assuming that your operating system supports it, to expand a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to expand (Figure 8-66 on page 504). Select Expand a VDisk from the list and click Go.
2. The Expanding Virtual Disks VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens. See Figure 8-85 on page 515. Follow these steps:

a. Specify the capacity to add to the VDisk. This is the increment to add; for example, if you have a 5 GB disk and you want it to become 10 GB, you specify 5 GB in this field.
b. Optionally, select the managed disk candidates from which to obtain the additional capacity. The default for a striped VDisk is to use equal capacity from each MDisk in the MDG.
Notes: With sequential VDisks, you must specify the MDisk from which you want to obtain space. There is no support for the expansion of image mode VDisks. If there are not enough extents to expand your VDisk to the specified size, you receive an error message. If you are using VDisk mirroring, all copies must be synchronized before expanding.
c. Optionally, you can format the extra space with zeros by selecting the Format Additional Managed Disk Extents check box. This does not format the entire VDisk, just the newly expanded space.
When you are done, click OK.

Figure 8-85 Expanding a VDisk

This ends the Expanding a VDisk procedure.
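From the CLI, the same expansion is a one-line sketch; the value is the increment to add, as in the GUI (names and sizes are examples):

svctask expandvdisksize -size 5 -unit gb vdisk7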

8.5.8 Assigning a VDisk to a host


When mapping a VDisk to a host, it does not matter whether the host is attached using an iSCSI or a Fibre Channel connection; the SVC treats the VDisk mapping in the same way for both connection types. Perform the following steps to map a VDisk to a host:


1. From the SVC Welcome screen (Figure 8-1 on page 470), select the Work with Virtual Disks option and then the Virtual Disks link. In the Viewing Virtual Disks window (Figure 8-86), from the drop-down menu, select Map VDisks to a host from the list and click Go.

Figure 8-86 Assigning VDisks to a host

2. In the window Creating Virtual Disk-to-Host Mappings (Figure 8-87), select the target host. We have the option to specify the SCSI LUN ID. (This field is optional. Use this field to specify an ID for the SCSI LUN. If you do not specify an ID, the next available SCSI LUN ID on the host adapter is automatically used.) Click OK.

Figure 8-87 Creating VDisk-to-Host Mappings window

3. You are presented with an information window that displays the status, as shown in Figure 8-88.


Figure 8-88 VDisk to host mapping successful

4. You now return to the Viewing Virtual Disks window (Figure 8-86 on page 516). You have now completed all the tasks required to assign a VDisk to an attached host, and the VDisk is ready for use by the host.
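The mapping can also be created from the CLI; a sketch with example host and VDisk names (the -scsi ID is optional, as in the GUI):

svctask mkvdiskhostmap -host Thor -scsi 0 vdisk7
svcinfo lsvdiskhostmap vdisk7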

8.5.9 Modifying a VDisk


The Modifying Virtual Disk menu item allows you to rename the VDisk, reassign the VDisk to another I/O group, and set throttling parameters. To modify a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to modify (Figure 8-66 on page 504). Select Modify a VDisk from the list and click Go.
2. The Modifying Virtual Disk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens. See Figure 8-89 on page 518. You can perform the following steps separately or in combination:
a. Type a new name for your VDisk.
Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length. However, it cannot start with a number or the word VDisk, because this prefix is reserved for SVC assignment only.
b. Select an alternate I/O group from the list to alter the I/O group to which it is assigned.
c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a number and select either I/O or MB from the list. Note the following items:
I/O governing effectively throttles the amount of I/Os per second (or MBs per second) to and from a specific VDisk. You might want to do this if you have a VDisk that has an access pattern that adversely affects the performance of other VDisks on the same set of MDisks, for example, if it uses most of the available bandwidth. If this application is highly important, then migrating the VDisk to another set of MDisks might be advisable. However, in some cases, it is an issue with the I/O profile of the application rather than a measure of its use or importance.
The choice between I/O and MB as the I/O governing throttle should be based on the disk access profile of the application. Database applications generally issue large amounts of I/O but only transfer a relatively small amount of data. In this case,


setting an I/O governing throttle based on MBs per second does not achieve much. It is better for you to use an I/O per second throttle. At the other extreme, a streaming video application generally issues a small amount of I/O, but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much. Therefore, you should use an MB per second throttle.
Additionally, you can specify a unit device identifier.
The Primary Copy is used to select which VDisk copy is going to be used as the preferred copy for read operations.
Mirror Synchronization rate is the I/O governing rate in percentage during initial synchronization. A zero value disables synchronization.
The Copy ID section is used for space-efficient VDisks. If you only have a single space-efficient VDisk, the Copy ID drop-down will be greyed out and you can change the warning thresholds and whether the copy will autoexpand. If you have a VDisk mirror and one or more of the copies are space-efficient, you can select a copy, or all copies, and change the warning thresholds and autoexpand setting individually.

Click OK when you have finished making changes.

Figure 8-89 Modifying a VDisk

8.5.10 Migrating a VDisk


To migrate a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to migrate (Figure 8-66 on page 504). Select Migrate a VDisk from the list and click Go.
2. The Migrating Virtual Disk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens, as shown in Figure 8-90 on page 519. From the MDisk Group Name list:


a. Select the MDG to which you want to reassign the VDisk. You will only be presented with a list of MDisk groups with the same extent size. b. Specify the number of threads to devote to this process (a value from 1 to 4). The optional threads parameter allows you to assign a priority to the migration process. A setting of 4 is the highest priority setting. If you want the process to take a lower priority over other types of I/O, you can specify 3, 2, or 1. Important: After a migration is started, there is no way to stop it. Migration continues until it is complete unless it is stopped or suspended by an error condition or the VDisk being migrated is deleted. When you have finished making your selections, click OK to begin the migration process. 3. You need to manually refresh your browser or close it and return to the Viewing Virtual Disks window periodically to see the MDisk Group Name column in the Viewing Virtual Disks window update to reflect the new MDG name.

Figure 8-90 Migrating a VDisk
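A CLI sketch of the same migration, with example names; lsmigrate reports the progress of running migrations:

svctask migratevdisk -vdisk vdisk7 -mdiskgrp MDG_DS47 -threads 4
svcinfo lsmigrate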

8.5.11 Migrating a VDisk to an image mode VDisk


Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path. This might be useful where the SVC is used as a data mover appliance. To migrate a VDisk to an image mode VDisk, the following rules apply:
- The destination MDisk must be greater than or equal to the size of the VDisk.
- The MDisk specified as the target must be in an unmanaged state.
- Regardless of the mode that the VDisk starts in, it is reported as being in managed mode during the migration.
- Both of the MDisks involved are reported as being in image mode during the migration.
- If the migration is interrupted by a cluster recovery, or by a cache problem, then the migration will resume after the recovery completes.
To accomplish the migration, perform the following steps:
1. Select a VDisk from the list, choose Migrate to an Image Mode VDisk from the drop-down list (Figure 8-91 on page 520), and click Go.
2. The Migrate to Image Mode VDisk wizard launches (not shown here). Read the steps in this window and click Next.
3. Select the radio button to the left of the MDisk where you want the data to be migrated (Figure 8-91 on page 520). Click Next.


Figure 8-91 Migrate to image mode VDisk wizard: Select the Target MDisk

4. Select the MDG the MDisk will join (Figure 8-92). Click Next.

Figure 8-92 Migrate to image mode VDisk wizard: Select MDG

5. Select the priority of the migration by selecting the number of threads (Figure 8-93). Click Next.

Figure 8-93 Migrate to image mode VDisk wizard: Select the Threads

6. Verify that the information you specified is correct (Figure 8-94). If you are satisfied, click Finish. If you want to change something, use the Back option.


Figure 8-94 Migrate to image mode VDisk wizard: Verify Migration Attributes

7. Figure 8-95 on page 521 displays the details of the VDisk that you are migrating.

Figure 8-95 Migrate to image mode VDisk wizard: Progress of migration

8.5.12 Creating a VDisk Mirror from an existing VDisk


You can add a mirrored copy to an existing VDisk, that is, you can have two copies of the underlying disk extents.
Note: You can also create a new mirrored VDisk by selecting an option during the VDisk creation, as shown in Figure 8-69 on page 506.
Any operation that can be done with a VDisk can be done with a VDisk mirror. It is transparent to higher level operations like Metro Mirror, Global Mirror, or FlashCopy. The second copy is not restricted to the same Managed Disk Group, so mirroring is an ideal method to protect your data from a disk system or an array failure. If one copy of the mirror fails, the SVC provides continuous data access through the other copy. When the failed copy is repaired, the copies automatically resynchronize.
VDisk mirroring can also be used as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The VDisk stays online, and can be used normally, while the data is being synchronized. The copies can also have different structures (that is, striped, image, sequential, or space-efficient) and different extent sizes.
To create a mirror copy from within a VDisk, perform the following steps:


1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.
2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens. See Figure 8-96 on page 522. You can perform the following steps separately or in combination:
a. Choose what type of VDisk Copy you want to create, striped or sequential.
b. Select the Managed Disk Group you want to put the copy in. We recommend that you choose a different group to maintain higher availability.
c. Select the Select MDisk(s) manually button, which will expand the section that has a list of MDisks that are available for adding.
d. Choose the Mirror synchronization rate. This is the I/O governing rate in percentage during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but this should be used only when the VDisk has never been used or is going to be formatted by the host.
e. You can make the copy space-efficient. This section will expand, giving you options to allocate the virtual size, warning thresholds, autoexpansion, and Grain size. See 8.5.4, Creating a space-efficient VDisk with auto-expand on page 509 for more information.
f. Optionally, format the new VDisk by selecting the Format the new VDisk copy and mark the VDisk synchronized check box. Use this option with care, because if the primary copy goes offline, you may not have the data replicated on the other copy.
g. Click OK.

Figure 8-96 Add Copy to VDisk window

You can monitor the MDisk copy synchronization progress from the Manage Progress menu option and then the View Progress link.
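Adding a mirrored copy can also be done from the CLI; a sketch with example names, where -syncrate corresponds to the Mirror synchronization rate and lsvdisksyncprogress shows the synchronization status:

svctask addvdiskcopy -mdiskgrp MDG_DS47 -vtype striped -syncrate 50 vdisk7
svcinfo lsvdisksyncprogress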


8.5.13 Creating a mirrored VDisk


In this section, we are going to create a mirrored VDisk step-by-step. This provides a highly available VDisk. Refer to 8.5.3, Creating a VDisk on page 505, perform steps 1 to 4, and then do the following:
1. In the Set Attributes window (Figure 8-97), follow these steps:
a. Select the type of VDisk to create (striped or sequential) from the list.
b. Select the cache mode (read/write, none) from the list.
c. Select a Unit device identifier (numerical value) for this VDisk.
d. Select the number of VDisks to create.
e. Select the Mirrored Disk check box. Some mirror disk options will appear.
f. Type the Mirror Synchronization rate, in percent. It is set to 50% by default.
g. Optionally, you can check the Synchronized check box. Select this option when MDisks are already formatted or when read stability to unwritten areas of the VDisk is not required.
h. Click Next.

Figure 8-97 Select the attributes for the VDisk

2. In the window Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0), shown in Figure 8-98, follow these steps: a. Select the managed disk group from the list. b. Type the capacity of the VDisk. Select the unit of capacity from the list. c. Click Next.


Figure 8-98 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window.

3. In the window Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1), shown in Figure 8-99, select a managed disk group for Copy 1 of the mirror. This can be defined within the same or on a different MDG. Click Next.

Figure 8-99 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window

4. In the window Name the VDisk(s) (Figure 8-100 on page 524), type a name for the virtual disk you are creating. In this case, we used MirrorVDisk1. Click Next.

Figure 8-100 Name the VDisk(s) window


5. In the Verify Mirrored VDisk window (Figure 8-101), verify the selections. We can select the Back button at any time to make changes.

Figure 8-101 Verifying Mirrored VDisk Attributes window

6. After selecting the Finish option, we are presented with the window shown in Figure 8-102, which tells us the result of the action.

Figure 8-102 Mirrored VDisk creation success

We click Close again, and by clicking the newly created VDisk, we can see more detailed information about that VDisk, as shown in Figure 8-103.


Figure 8-103 List of created mirrored VDisks

8.5.14 Creating a VDisk in image mode


An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as extent 0 on the MDisk, and so on. Without this 1:1 mapping (for example, if extent 0 on the VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly introduced MDisk is still readable.
Image mode is intended for the purpose of migrating data from an environment without the SVC to an environment with the SVC. A LUN that was previously directly assigned to a SAN-attached host can now be reassigned to the SVC (during a short outage) and returned to the same host as an image mode VDisk, with the user's data intact. During the same outage, the host, cables, and zones can be reconfigured to access the disk, now through the SVC. After access is re-established, the host workload can resume while the SVC manages the transparent migration of the data to other SVC managed MDisks on the same or another disk subsystem.
We recommend that, during the migration phase of the SVC implementation, you add one image mode VDisk at a time to the SVC environment. This reduces the possibility of error. It also means that the short outages required to reassign the LUNs from the subsystem or subsystems and reconfigure the SAN and host can be staggered over a period of time to minimize the business impact.
Since SVC version 4.3, we have the ability to create a VDisk mirror or a space-efficient VDisk while creating an image mode VDisk. Using the mirroring option while making the image mode VDisk can serve as a storage array migration tool, because the Copy 1 MDisk will also be in image mode. To create a space-efficient image mode VDisk, you need the same amount of real disk space as the original MDisk. This is because the SVC is unable to detect how much physical space a host is actually utilizing on a LUN.


Important: You can create an image mode VDisk only by using an unmanaged disk, that is, you must do this before you add the MDisk that corresponds to your original logical volume to a Managed Disk Group.
To create an image mode VDisk, perform the following steps:
1. From the My Work panel on the left side of your GUI, select Work with Virtual Disks.
2. From Work with Virtual Disks, select Virtual Disks.
3. From the drop-down menu, select Create Image Mode VDisk.
4. From the overview for the creation of an image mode VDisk, select Next.
5. The Set attributes window appears (Figure 8-104 on page 528), where you enter the name of the VDisk you want to create.

Figure 8-104 Set attributes for the VDisk

6. You can also select whether you want to have read and write operations stored in cache by specifying a cache mode. Additionally, you can specify a unit device identifier. You can optionally choose to have it as a mirrored or Space-efficient VDisk. Click Next to continue. Attention: You must specify the cache mode when you create the VDisk. After the VDisk is created, you cannot change the cache mode. a. We describe the VDisk cache modes in Table 8-1.
Table 8-1 VDisk cache modes
Read/Write: All read and write I/O operations that are performed by the VDisk are stored in cache. This is the default cache mode for all VDisks.
None: All read and write I/O operations that are performed by the VDisk are not stored in cache.


Note: If you do not provide a name, the SVC automatically generates the name VDiskX (where X is the ID sequence number assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9, a dash, and the underscore. It can be between one and 15 characters in length, but cannot start with a number, a dash, or the word VDisk, because this prefix is reserved for SVC assignment only. 7. Next you choose your MDisk to use for your Image Mode VDisk as shown in Figure 8-105.

Figure 8-105 Select your MDisk to use for your Image Mode VDisk

8. Next, select your preferred I/O group, or have the system choose for you, as shown in Figure 8-106.

Figure 8-106 Select the I/O group or preferred Node

9. Figure 8-107 shows you the characteristics of the new image VDisk. Click Finish to complete this task.


Figure 8-107 Verify image VDisk attributes

You can now map the newly created VDisk to your host.
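For reference, an image mode VDisk can be created from the CLI in one step, again only using an unmanaged MDisk; a sketch with example names:

svctask mkvdisk -mdiskgrp MDG_IMG -iogrp 0 -vtype image -mdisk mdisk20 -name image_vdisk1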

8.5.15 Creating an image mode mirrored VDisk


This procedure adds a mirrored copy to the image mode VDisk creation process. The second copy (Copy 1) will also be an image mode MDisk. This could be used as a storage array migration tool, using the SVC as the data mover.
1. From the My Work panel on the left side of your GUI, select Work with Virtual Disks.
2. From Work with Virtual Disks, select Virtual Disks.
3. From the drop-down menu, select Create Image Mode VDisk.
4. After selecting Next on the overview page, you will see the attribute selection page, as shown in Figure 8-108 on page 530.
a. Enter the name of the VDisk you want to create.
b. Select the Mirrored Disk check box and a sub-section expands. The mirror synchronization rate is a percentage of the peak rate. The Synchronized option should be used only when the original disk is unused (or going to be otherwise formatted by the host).


Figure 8-108 Set attributes for the VDisk

5. Figure 8-109 enables you to choose on which of the available MDisks your Copy 0 and Copy 1 will be stored. Notice that we have selected a second MDisk that is larger than the original. Click Next to proceed.

Figure 8-109 Select MDisks

6. Now you can optionally select an I/O group and preferred node, and you can select an MDG for each of the MDisk copies, as shown in Figure 8-110. Click Next to proceed.


Figure 8-110 Choose an I/O group and an MDG for each of the MDisk copies

7. Figure 8-111 on page 531 shows you the characteristics of the new image mode VDisk. Click Finish to complete this task.

Figure 8-111 Verify imaged VDisk attributes

You can monitor the MDisk copy synchronization progress by selecting the Manage Progress option and then the View Progress link, as shown in Figure 8-112.


Figure 8-112 VDisk copy synchronization status.

You have the option of assigning the VDisk to the host now, or waiting until it is synchronized and then, after deleting one of the mirror copies, mapping the VDisk to the host.

8.5.16 Migrating to a space-efficient VDisk using VDisk mirroring


In this scenario, we are going to migrate from a fully allocated (or an image mode) VDisk to a space-efficient VDisk using VDisk mirroring. We repeat the procedure as described and shown in Creating a VDisk Mirror from an existing VDisk on page 521, but here we select Space-efficient for the mirrored copy.
1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.
2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens. See Figure 8-113 on page 533. You can perform the following steps separately or in combination:
a. Choose the type of VDisk Copy you want to create, striped or sequential.
b. Select the Managed Disk Group you want to put the copy in.
c. Select the Select MDisk(s) manually button, which will expand the section with a list of MDisks that are available for adding.
d. Choose the Mirror synchronization rate. This is the I/O governing rate in percentage during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but this should be used only when the VDisk has never been used or is going to be formatted by the host.
e. Select Space-efficient. This section will expand, and you should do the following:
i. Type 100 in the % box for the real size to initially allocate. The SVC will see Copy 0 as 100% utilized, so Copy 1 must be defined as the same size.
ii. Uncheck the Warn when used capacity of VDisk reaches check box.
iii. Check Auto expand.
iv. Set the Grain size. See 8.5.4, Creating a space-efficient VDisk with auto-expand on page 509 for more information.
f. Click OK.


Figure 8-113 Add a space-efficient Copy to VDisk window

You can monitor the VDisk copy synchronization progress from the Manage Progress menu option and then the View Progress link, as shown in Figure 8-114 on page 534.


Figure 8-114 Two VDisk copies ongoing in the system

8.5.17 Deleting a VDisk Copy from a VDisk mirror


Once the VDisk copy has finished synchronizing, you can remove the original VDisk copy (Copy 0). 1. In the Viewing Virtual Disks window, select the mirrored VDisk from the list, choose Delete a Mirrored VDisk Copy from the drop-down list (Figure 8-115), and click Go.

Figure 8-115 Viewing Virtual Disks - Deleting a mirrored VDisk copy

2. Figure 8-116 displays both copies of the VDisk mirror. Select the radio button of the original copy (Copy ID 0) and click OK.


Figure 8-116 Deleting VDisk Copy 0

The VDisk is now a single Space-efficient VDisk copy. To migrate an SE VDisk to a fully allocated VDisk, follow the same scenario, but add a normal (fully allocated) VDisk as the second copy.
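The equivalent CLI operation is a single delete of the copy; a minimal sketch follows (VD_APP1 is a hypothetical VDisk name):

svctask rmvdiskcopy -copy 0 VD_APP1

This removes Copy 0 (the original fully allocated copy) and leaves the space-efficient copy as the only copy of the VDisk.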

8.5.18 Splitting a VDisk Copy


To split off a synchronized VDisk copy to a new VDisk, perform the following steps:
1. Select a mirrored VDisk from the list, choose Split a VDisk Copy from the drop-down list (Figure 8-66 on page 504), and click Go.
2. The Split a Copy from VDisk VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens (see Figure 8-117 on page 536). Perform the following steps:
   a. Select which copy you want to split.
   b. Type a name for the new VDisk.
   c. Optionally, force the split even if the copy is not synchronized. If you do, the split copy might not be point-in-time consistent.
   d. Choose an I/O group and then a preferred node. In our case, we let the system choose.
   e. Select the cache mode, Read/Write or None.
   f. If you want, enter a unit device identifier.
   g. Click OK.


Figure 8-117 Split a VDisk Copy window

This new VDisk is available to be mapped to a host.
Note: Once you split a VDisk mirror, you cannot resynchronize or recombine the copies. You must create a VDisk copy from scratch.
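For reference, a minimal CLI sketch of the same split operation follows (VD_APP1 and VD_APP1_split are hypothetical names):

svctask splitvdiskcopy -copy 1 -name VD_APP1_split VD_APP1

This splits copy 1 off the mirrored VDisk VD_APP1 and creates a new VDisk named VD_APP1_split from it.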

8.5.19 Shrinking a VDisk


The method that the SVC uses to shrink a VDisk is to remove the required number of extents from the end of the VDisk. Depending on where the data actually resides on the VDisk, this can be quite destructive. For example, you might have a VDisk that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity) and you want to decrease the capacity to 64 extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the operating system, there is no easy way to ensure that your data resides entirely on extents 0 through 63, so be aware that you might lose data.
Although this is easily done using the SVC, you must ensure that your operating system supports shrinking, either natively or by using third-party tools, before using this function. In addition, we recommend that you always have a good, current backup before you execute this task.
Shrinking a VDisk is useful in certain circumstances, such as:
- Reducing the size of a candidate target VDisk of a copy relationship to make it the same size as the source.
- Releasing space from VDisks to have free extents in the MDG, provided you no longer use that space and take precautions with the remaining data, as explained earlier.
Assuming your operating system supports it, perform the following steps to shrink a VDisk:


1. Perform any necessary steps on your host to ensure that you are not using the space you are about to remove.
2. Select the radio button to the left of the VDisk you want to shrink (Figure 8-66 on page 504). Select Shrink a VDisk from the list and click Go.
3. The Shrinking Virtual Disks VDiskname window (where VDiskname is the VDisk you selected in the previous step) opens, as shown in Figure 8-118. In the Reduce Capacity By field, enter the capacity by which you want to reduce the VDisk and select B, KB, MB, GB, TB, or PB. The final capacity of the VDisk is the current capacity minus the capacity that you specify.
Note: Be careful with the capacity information. The Current Capacity field shows the value in MB, while you can specify the capacity to remove in GB. The SVC calculates 1 GB as 1024 MB.
When you are done, click OK. The changes become visible on your host.

Figure 8-118 Shrinking a VDisk
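If you prefer the CLI, a minimal sketch of the same operation follows (VD_APP1 is a hypothetical VDisk name; the size is the amount to remove, not the final size):

svctask shrinkvdisksize -size 1 -unit gb VD_APP1

This removes 1 GB from the end of VD_APP1, subject to the same data placement caveats described above.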

8.5.20 Showing the MDisks used by a VDisk


To show the MDisks that are used by a specific VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDisk information about (Figure 8-66 on page 504). Select Show MDisks This VDisk is Using from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing Managed Disks window (Figure 8-119 on page 537).

Figure 8-119 Showing MDisks used by a VDisk


For information about what you can do in this window, see 8.2.4, Managed disks on page 479.
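The same information is available from the CLI; a minimal sketch follows (VD_APP1 is a hypothetical VDisk name):

svcinfo lsvdiskmember VD_APP1

This lists the IDs of the MDisks that provide extents to VD_APP1.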

8.5.21 Showing the MDG to which a VDisk belongs


To show the MDG to which a specific VDisk belongs, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDG information about (Figure 8-66 on page 504). Select Show MDisk Group This VDisk Belongs To from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing Managed Disk Groups Belonging to VDiskname window (Figure 8-120).

Figure 8-120 Showing an MDG for a VDisk

8.5.22 Showing the host to which the VDisk is mapped


To show the host to which a specific VDisk is mapped, select the radio button to the left of the VDisk you want to view host mapping information about (Figure 8-66 on page 504). Select Show Hosts This VDisk is Mapped To from the list and click Go. This shows you the host to which the VDisk is mapped (Figure 8-121 on page 538). Alternatively, you can use the procedure described in Showing VDisks mapped to a particular host on page 539 to see all VDisk-to-host mappings.

Figure 8-121 Show host to VDisk mapping.
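A minimal CLI sketch of the same query follows (VD_APP1 is a hypothetical VDisk name):

svcinfo lsvdiskhostmap VD_APP1

This lists the host mappings that exist for VD_APP1, including the host name and the SCSI LUN ID used for the mapping.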

8.5.23 Showing capacity information


To show the capacity information of the cluster, select Show Capacity Information from the VDisk overview drop-down list, as shown in Figure 8-122.


Figure 8-122 Selecting capacity information for a VDisk

Figure 8-123 shows you the total MDisk capacity, the space in the MDGs, the space allocated to the VDisks, and the total free space.

Figure 8-123 Show capacity information

8.5.24 Showing VDisks mapped to a particular host


To show the VDisks assigned to a specific host, perform the following steps: 1. From the SVC Welcome screen, click the Work with Virtual Disks option and then the Virtual Disk to Host Mappings link (Figure 8-124).

Figure 8-124 VDisk to host mapping


2. Now you can see which VDisks are mapped to each host. If this is a long list, you can use the Additional Filtering and Sort options described in 8.7.1, Organizing on screen content on page 543.

8.5.25 Deleting VDisks from a host


From the same window where you can view the VDisk-to-host mappings (Figure 8-124), you can also delete a mapping. Select the radio button to the left of the host and VDisk combination you want to delete, ensure that Delete a Mapping is selected from the list, and click Go.
1. Confirm the selection you made, shown in Figure 8-125, by clicking the Delete button.

Figure 8-125 Deleting VDisk to Host mapping

2. Now you are back at the window shown in Figure 8-124. Now you can assign this VDisk to another Host, as described in 8.5.8, Assigning a VDisk to a host on page 515. You have now completed the tasks required to manage VDisks within an SVC environment.
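From the CLI, the same housekeeping can be done with two commands; a minimal sketch follows (W2K3_HOST and VD_APP1 are hypothetical names):

svcinfo lshostvdiskmap W2K3_HOST
svctask rmvdiskhostmap -host W2K3_HOST VD_APP1

The first command lists the VDisks that are currently mapped to the host, and the second command removes the mapping of VD_APP1 from that host.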

8.6 Working with SSD drives


With SVC 5.1, solid state drives (SSDs) are introduced as part of each SVC node. During operational work on the SVC cluster, it is necessary to know how to identify where the SSD drives are located and how they are configured. This section describes the basic operational tasks that involve SSD drives. Note that storing the quorum disk on SSD is not supported. More detailed information about SSD drives and internal controllers is in 2.5, Solid State Drives on page 49.

8.6.1 SSD introduction


If you have SSD drives installed in your nodes, they appear as unmanaged MDisks that are controlled by an internal controller. This controller is used only for the SSD drives, and each controller is dedicated to a single node; therefore, there can be up to eight internal controllers in a single cluster configuration. These controllers are automatically assigned as owners of the SSD drives, and they have the same WWNN as the node they belong to. An internal controller is identified as shown in Figure 8-126.


Figure 8-126 SVC internal controller

The unmanaged MDisks (SSD drives) are owned by the internal controllers. When these MDisks are added to an MDG it is recommended that a dedicated MDG is created for the SSD drives. When those MDisks are added to an MDG they will become managed and will be treated as any other MDisks in an MDG. If we look closer at one of the selected controllers, as shown in Figure 8-127 on page 541, we can verify the SVC node that owns this controller, and that this is an internal SVC controller.

Figure 8-127 Shows internal SSD controller

We can now check what MDisks (sourced from our SSD drives) are provisioned from that controller as shown in Figure 8-128.

Figure 8-128 Our SSD drives


From this view, we can see all the relevant information, such as the status, the MDG, and the size. To see more detailed information about a single MDisk (a single SSD), click that MDisk; the details are shown in Figure 8-129 on page 542.

Figure 8-129 Showing details for an SSD MDisk.

Notice the controller ID (6); this is an identifier for the internal controller type. When you have your SSDs in full operation and you want to see the VDisks that are using them, the easiest way is to locate the MDG that contains your SSD drives as MDisks and select Show VDisks Using This Group, as shown in Figure 8-130.

Figure 8-130 Showing VDisks using our SSD drives

This displays the VDisks that are using your SSD drives.


8.7 SVC Advanced operations using the GUI


In the topics that follow we describe those activities that we feel are more advanced.

8.7.1 Organizing on screen content


Section 8.1.1, Organizing on screen content on page 470 contains detailed information about how you can filter and sort the content displayed in the GUI. If you need to access the online help, click the help icon in the upper right corner of the window. This opens a new window called the Information Center, where you can search for any item you want help with (see Figure 8-131 on page 543).

Figure 8-131 Online help using the ? icon

General housekeeping
If at any time the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the arrow icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.
In addition, each time you open a configuration or administration window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the main banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.


8.8 Managing the cluster using the GUI


This section explains the various configuration and administration tasks that you can perform on the cluster.

8.8.1 Viewing cluster properties


Perform the following steps to display the cluster properties: 1. From the SVC Welcome screen, select the Manage Cluster option and then the View Cluster Properties link. 2. The Viewing General Properties window (Figure 8-132) opens. Click the IP Addresses, Remote Authentication, Space, Statistics, Metro & Global Mirror, iSCSI, SNMP, Syslog, E-mail server and E-mail user links and you will see additional information about how your cluster is configured.

Figure 8-132 View Cluster Properties: General Properties

8.8.2 Modifying IP addresses


With SVC 5.1, new functionality enables us to use both Ethernet ports of each node, which means that there are now two active cluster ports on each node. This is described in further detail in 2.2.11, Usage of IP Addresses and Ethernet ports on page 28.
If the cluster IP address is changed, any open command-line shell closes during the processing of the command, and you must reconnect to the new IP address if you were connected through that port. In this section, we discuss the modification of IP addresses.
Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is broken. You need to relaunch the SAN Volume Controller Application from the GUI Welcome screen.
Modifying the IP address of the cluster, although quite simple, requires some reconfiguration of other items within the SVC environment, including reconfiguring the central administration GUI by re-adding the cluster with its new IP address. Perform the following steps to modify the cluster and service IP addresses of our SVC configuration:


1. From the SVC Welcome screen, select the Manage Cluster option and the Modify IP Addresses link. 2. The Modify IP Addresses window (Figure 8-133) opens.

Figure 8-133 Modify cluster IP address

Select the port you want to modify, select Modify Port Setting, and click Go. Notice that you can configure both ports on the SVC node, as shown in Figure 8-134.

Figure 8-134 Modify cluster IP


We enter the new information as shown in Figure 8-135 on page 546.

Figure 8-135 Entering new cluster IP

3. You advance to the next window, which shows a message indicating that the IP addresses were updated. You have now completed the tasks required to change the IP addresses (cluster, service, gateway, and master console) for your SVC environment.

8.8.3 Starting the statistics collection


Perform the following steps to start statistics collection on your cluster: 1. From the SVC Welcome screen, select the Manage Cluster option and the Start Statistics Collection link. 2. The Starting the Collection of Statistics window (Figure 8-136) opens. Make an interval change, if desired. The interval you specify (minimum 1, maximum 60) is in minutes. Click OK.

Figure 8-136 Starting the Collection of Statistics

3. Although it does not state the current status, clicking OK turns on the statistics collection. To verify, click the Cluster Properties link, as you did in 8.8.1, Viewing cluster properties on page 544. Then click the Statistics link. You see the interval as specified in Step 2 and the status of On, as shown in Figure 8-137.


Figure 8-137 Verifying that statistics collection is on

You have now completed the tasks required to start statistics collection on your cluster.
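The equivalent CLI command is shown in the following minimal sketch (the 15-minute interval is an example value):

svctask startstats -interval 15

This starts statistics collection with samples gathered every 15 minutes.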

8.8.4 Stopping the statistics collection


Perform the following steps to stop statistics collection on your cluster: 1. From the SVC Welcome screen, select the Manage Cluster option and the Stop Statistics Collection link. 2. The Stopping the Collection of Statistics window (Figure 8-138) opens, and you see a message asking whether you are sure that you want to stop the statistics collection. Click Yes to stop the ongoing task.

Figure 8-138 Stopping the collection of statistics

3. The window closes. To verify that the collection has stopped, click the Cluster Properties link, as you did in 8.8.1, Viewing cluster properties on page 544. Then click the Statistics link. Now you see the status has changed to Off, as shown in Figure 8-139 on page 548.


Figure 8-139 Verifying that statistics collection is off

You have now completed the tasks required to stop statistics collection on your cluster.

8.8.5 Metro Mirror and Global Mirror


From the Manage Cluster window, we can see how Metro Mirror or Global Mirror is configured. In Figure 8-140 on page 548, we can see an overview of the partnership properties and which clusters are currently in partnership with our cluster.

Figure 8-140 Metro Mirror and Global Mirror overview

8.8.6 iSCSI
From the View Cluster Properties screen, we can select the iSCSI overview, which shows whether you have configured an iSNS server and CHAP, and which authentication method is supported. This is shown in Figure 8-141.


Figure 8-141 iSCSI overview from cluster properties window

8.8.7 Setting the cluster time and configuring an NTP server


Perform the following steps to configure the time settings:
1. From the SVC Welcome screen, select the Manage Cluster option and the Set Cluster Time link.
2. The Cluster Date and Time Settings window (Figure 8-142 on page 550) opens. At the top of the window, you can see the current settings.
3. If you are using an NTP server, enter the IP address of the NTP server and select Set NTP Server; from now on, the cluster uses that server as its time reference.
4. If it is necessary to change the cluster time manually, select Update Cluster Time.


Figure 8-142 Changing cluster date and time.

You have now completed the tasks necessary to configure an NTP server and set the cluster time zone and time.

8.8.8 Shutting down a cluster


If all input power to a SAN Volume Controller cluster is to be removed for more than a few minutes (for example, if the machine room power is shut down for maintenance), it is important that you shut down the cluster before you remove the power. Shutting down the cluster while it is still connected to the main power ensures that the UPS batteries are still fully charged when power is restored.
If you remove the main power while the cluster is still running, the UPS detects the loss of power and instructs the nodes to shut down. This can take several minutes to complete, and although the UPS has sufficient power to do this, you will be unnecessarily draining the UPS batteries.
When power is restored, the SVC nodes start; however, one of the first checks they make is to ensure that the UPS batteries have sufficient power to survive another power failure, enabling the nodes to perform a clean shutdown. (We do not want the UPS to run out of power while the nodes' shutdown activities have not yet completed.) If the UPS batteries are not sufficiently charged, the node does not start. It can take up to three hours to charge the batteries sufficiently for a node to start.


Note: When a node shuts down due to loss of power, it dumps the cache to an internal hard drive so the cache data can be retrieved when the cluster starts. With the 8F2/8G4 nodes, the cache is 8 GB, so it can take several minutes to dump to the internal drive. SVC UPSs are designed to survive at least two power failures in a short time before the nodes refuse to start until the batteries have sufficient power to survive another immediate power failure. If, during your maintenance activities, the UPS detected a loss of power more than once (and thus the nodes started and shut down more than once in a short time frame), you might find that you have unknowingly drained the UPS batteries and have to wait until they are charged sufficiently before the nodes will start.
Perform the following steps to shut down your cluster:
Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all VDisks provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to do this if you are only shutting down one SVC node.
Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the VDisks provided by the cluster. If you are unsure which hosts are using the VDisks provided by the cluster, follow the procedure in 8.5.22, Showing the host to which the VDisk is mapped on page 538, and repeat it for all VDisks.
1. From the SVC Welcome screen, select the Manage Cluster option and the Shut Down Cluster link.
2. The Shutting Down cluster window (Figure 8-143) opens, and a message asks you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process.
Note: At this point, you will lose administrative contact with your cluster.

Figure 8-143 Shutting down the cluster

You have now completed the tasks required to shut down the cluster. Now you can shut down the uninterruptible power supplies by pressing the power buttons on their front panels.
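For completeness, the CLI equivalent is shown in the following minimal sketch; the command asks for confirmation before the cluster is shut down:

svctask stopcluster

Issue this command only after all host I/O has been quiesced, for the same reasons described above.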


Tip: When you shut down the cluster, it will not automatically start, and will have to be manually started. If the cluster shuts down because the UPS has detected a loss of power, it will automatically restart when the UPS has detected the power has been restored (and the batteries have sufficient power to survive another immediate power failure).

Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have re-established administrative contact using the GUI, your cluster is fully operational again.

8.9 Manage authentication


Users are managed from within the Manage Authentication window in the SAN Volume Controller Console GUI (see Figure 8-146 on page 554). Each user account has a name, a role, and a password assigned to it. This differs from the SSH key-based role approach used by the CLI. Authentication is described in detail in 2.3.5, User Authentication on page 40. The role-based security feature organizes the SVC administrative functions into groups, known as roles, so that permissions to execute the various functions can be granted differently to different administrative users. There are four main roles and one special one. The user roles are shown in Table 8-2.
Table 8-2 Authority roles

User group: Security Admin
Role: All commands
Users: Superusers

User group: Administrator
Role: All commands except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset
Users: Administrators that control the SVC

User group: Copy Operator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership
Users: Those that control all copy functionality of the cluster

User group: Service
Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime
Users: Those that perform service maintenance and other hardware tasks on the cluster

User group: Monitor
Role: All svcinfo commands, and svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser; svcconfig: backup
Users: Those only needing view access

The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change permissions or delete this account, only change the password, and this password can also be changed manually on the front panel of the cluster nodes.

8.9.1 Modify current user


From the SVC Welcome screen, select the Manage authentication option in the My Work pane, and select Modify Current User as shown in Figure 8-144.

Figure 8-144 Modifying current user


Towards the top of the screen you can see the name of the user you are modifying and we enter our new password as shown in Figure 8-145.

Figure 8-145 Changing password for current user

8.9.2 Creating a user


Perform the following steps to view and create a user: 1. From the SVC Welcome screen, select the Users option in the My Work pane, as shown in Figure 8-146 on page 554.

Figure 8-146 Viewing users

2. Select User from My work window, and select Create a User from the drop down menu, as shown in Figure 8-147.

Figure 8-147 Create a user.

3. Enter a name for your user and the desired password. Since we are not connected to an LDAP server, we select local authentication, and we can therefore choose which user group our user belongs to. In our scenario, we are creating a user for SAN administrative purposes, so it is appropriate to add this user to the Administrator group. We also attach an SSH key so that a CLI session can be opened as well. We view the attributes as shown in Figure 8-148 on page 555.

Figure 8-148 Attributes for new user called qwerty created

And we see the result of our creation in Figure 8-149.

Figure 8-149 Overview of created users
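A minimal CLI sketch of the same task follows (the user name qwerty matches our example; the password and user group are example values):

svctask mkuser -name qwerty -usergrp Administrator -password passw0rd

This creates a locally authenticated user named qwerty in the Administrator user group with the specified password.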

8.9.3 Modifying a user role


Perform the following steps to modify a role: 1. Select the radio button to the left of the user, as shown in Figure 8-150, to change the assigned role. Select Modify a User from the drop-down menu and click Go.

Figure 8-150 Modify a User


2. You have the option of changing the password, assigning a new role or changing the SSH key for the given user name. Click OK (Figure 8-151).

Figure 8-151 Modifying a user window

8.9.4 Deleting a user role


Perform the following steps to delete a user role. 1. Select the radio button to the left of the user(s) that you want to delete. Select Delete a User from the drop-down list (Figure 8-152) and click Go.

Figure 8-152 Delete a user

2. Click OK to confirm that you want to delete the user, as shown in Figure 8-153.

Figure 8-153 Confirming deleting a user


8.9.5 User Groups


We have several options to change and modify our user groups. There are five roles that can be assigned to user groups. These roles cannot be modified, but a new user group can be created and linked to an already configured role. In Figure 8-154, we select Create a Group.

Figure 8-154 Create a new user group

Here we have several options for our user group and we find detailed information about the groups available. In Figure 8-155 we can see the options and these are the same options we are presented with when we select Modify User group.

Figure 8-155 Create user group or modify a user group

8.9.6 Cluster password


To change the cluster password, select Manage authentication from the Welcome screen and from there select Cluster Passwords as shown in Figure 8-156.

Figure 8-156 Change cluster password


8.9.7 Remote authentication


To enable remote authentication using LDAP we need to configure our SVC cluster for that purpose. That is done by selecting Manage authentication from My work and from there we select Remote Authentication as shown in Figure 8-157.

Figure 8-157 Remote authentication

We have now completed the tasks required to create, modify, and delete a user and user groups within the SVC cluster.

8.10 Working with nodes using the GUI


This section discusses the various configuration and administration tasks that you can perform on the nodes within an SVC cluster.

8.10.1 I/O groups


This section details the tasks that can be performed at an I/O group level.

8.10.2 Renaming an I/O group


Perform the following steps to rename an I/O group: 1. From the SVC Welcome screen, select the Work with Nodes option and the I/O Groups link. 2. The Viewing Input/Output Groups window (Figure 8-158) opens. Select the radio button to the left of the I/O group you want to rename. In this case, we select io_grp1. Ensure that Rename an I/O Group is selected from the drop-down list. Click Go.

Figure 8-158 Viewing I/O groups


3. On the Renaming I/O Group window (I/O Group Name is the I/O group you selected in the previous step), type the New Name you want to assign to the I/O group. Click OK, as shown in Figure 8-159. Our new name is IO_grp_SVC02.

Figure 8-159 Renaming the I/O group

Note: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash, and the underscore. It can be between one and 15 characters in length, but it cannot start with a number, the dash, or the word iogrp, because this prefix is reserved for SVC assignment only. SVC also uses io_grp as a reserved word prefix; an I/O group name therefore cannot be changed to io_grpN, where N is numeric. However, io_grpNy or io_grpyN, where y is any non-numeric character used in conjunction with N, is acceptable.
We have now completed the tasks required to rename an I/O group.
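The rename can also be done from the CLI; a minimal sketch follows (the names match the example above):

svctask chiogrp -name IO_grp_SVC02 io_grp1

This renames the I/O group io_grp1 to IO_grp_SVC02.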

8.10.3 Adding nodes to the cluster


After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and the cluster Web interface, only one node (the configuration node) is set up. To be a fully functional SVC cluster, at least a second node must be added to the configuration. Perform the following steps to add nodes to the cluster:
1. Open the GUI using one of the following methods:
   - Double-click the SAN Volume Controller Console icon on your SSPC desktop.
   - Open a Web browser on the SSPC console and point to this address: http://localhost:9080/ica
   - Open a Web browser on a separate workstation and point to this address: http://sspcconsoleipaddress:9080/ica
   The GUI Welcome screen appears, as shown in Figure 8-160, and we select Clusters from the My Work pane.


Figure 8-160 GUI Signon window

2. The Viewing Clusters window appears, as shown in Figure 8-161. This window contains several useful links and pieces of information: My Work (top left), the GUI version and build level information (right, under the main graphic), and a hypertext link to the SVC download page: http://www.ibm.com/storage/support/2145
3. On the Viewing Clusters window (Figure 8-161), select the radio button next to the cluster on which you want to perform actions (in our case, ITSO_CLS3) and click Go.

Figure 8-161 Launch the SAN Volume Controller application

4. The SAN Volume Controller Console Application launches in a separate browser window (Figure 8-162). In this window, as with the Welcome screen, you can see several links under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version and build level information (right, under main graphic), and a hypertext link that will take you to the SVC download page: http://www.ibm.com/storage/support/2145 Under My Work, click the Work with Nodes option and then the Nodes link.


Figure 8-162 SVC Console Welcome screen

5. The Viewing Nodes window (Figure 8-163) opens. Note the input/output (I/O) group name (for example, io_grp0). Select the node you want to add. Ensure that Add a node is selected from the drop-down list and click Go.

Figure 8-163 Viewing Nodes

Note: You can rename the existing node to your own naming convention standards (we show how to do this later). In your window, it should appear as node1 by default. 6. The next window (Figure 8-164) displays the available nodes. Select the node from the Available Candidate Nodes drop-down list. Associate it with an I/O group and provide a name (for example, SVCNode2). Click OK.


Figure 8-164 Adding a Node to a Cluster window

Note: If you do not provide a name, the SVC automatically generates the name nodeX, where X is the ID sequence number assigned by the SVC internally. If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. It can be between one and 15 characters in length, but cannot start with a number or the word node, since this prefix is reserved for SVC assignment only.
In our case, we only have enough nodes to complete the formation of one I/O group. Therefore, we added our new node to the I/O group that node1 was already using, namely io_grp0 (you can rename it from the default of io_grp0 using your own naming convention standards).
If this window does not display any available nodes (indicated by the message CMMVC1100I There are no candidate nodes available), check that your second node is powered on and that the zones are appropriately configured in your switches. It is also possible that a pre-existing cluster's configuration data is stored on it. If you are sure that this node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. When this is complete, return to this window and you should see the node listed.
7. Return to the Viewing Nodes window (Figure 8-165). It shows the status of the node changing from Adding to Online.


Figure 8-165 Node added and currently with status adding

Note: This window does not automatically refresh. Therefore, you continue to see the Adding status only until you click the Refresh button.

We have completed the cluster configuration and now you have a fully redundant SVC environment.
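For reference, a minimal CLI sketch of adding a node follows; the panel name 104603 is a hypothetical value that you would read from the candidate list or from the node front panel:

svcinfo lsnodecandidate
svctask addnode -panelname 104603 -iogrp io_grp0 -name SVCNode2

The first command lists the candidate nodes that are visible to the cluster, and the second command adds the selected candidate to io_grp0 with the name SVCNode2.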

8.10.4 Configuring iSCSI ports


In this topic, we show how to configure our cluster for use with iSCSI. We configure our nodes to use both the primary and the secondary Ethernet ports for iSCSI; these are the same physical ports that carry the cluster IP addresses. Configuring the node IP addresses for iSCSI does not affect the cluster IP; the cluster IP is changed as shown in 8.8, Managing the cluster using the GUI on page 544.
It is important to know that we can have more than a one-to-one relationship between IP addresses and a physical connection. Up to a four-to-one (4:1) relationship is possible, consisting of two IPv4 plus two IPv6 addresses (four in total) on one physical connection, per port, per node.
Note: When reconfiguring IP ports, be aware that already configured iSCSI connections will need to reconnect if changes are made to the IP addresses of the nodes.
iSCSI authentication, or CHAP, can be configured in two ways: either for the whole cluster or per host connection. Configuring CHAP for the whole cluster is shown in 8.8.6, iSCSI on page 548. In our scenario, we have a cluster IP of 9.64.210.64, as shown in Figure 8-166, and it will not be affected while we configure the node IP addresses.


Figure 8-166 Cluster IP address shown

We start by selecting Work with nodes from our Welcome Window and select Node Ethernet Ports as shown in Figure 8-167 on page 564.

Figure 8-167 Configuring node ethernet ports

We can see that we have four (two per node) connections to use and they are all physically connected with a 100 Mb link but they are not configured yet. From the drop-down menu we select Configure a Node Ethernet Port and insert the IP that we intend to use for iSCSI as shown in Figure 8-168.

Figure 8-168 IP parameters for iSCSI

We can now see that one of our ethernet ports is now configured and online as shown in Figure 8-169. We do the same for the three remaining IP addresses.


Figure 8-169 Ethernet port successfully configured and online

We do the same for the remaining ports and use a unique IP for each port. When we have done that all our ethernet ports are configured as shown in Figure 8-170 on page 565.

Figure 8-170 All ethernet ports are online

Now both physical ports on each node are configured for iSCSI. We can see the iSCSI identifier (iSCSI name) for each SVC node by selecting Work with Nodes from the Welcome window and then selecting Nodes; under the iSCSI Name column, we see the iSCSI identifier, as shown in Figure 8-171. Each node has a unique iSCSI name that is associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node will be visible in the iSCSI configuration tool on the host.

Figure 8-171 iSCSI identifier for our nodes

We also have the possibility to enter an iSCSI alias name for our iSCSI name on the node itself as shown in Figure 8-172.


Figure 8-172 Entering iSCSI alias name

We change the alias to something easier to recognize, as shown in Figure 8-173 on page 566.

Figure 8-173 Changing the Alias name

We have now finished configuring iSCSI for our SVC cluster.
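A minimal CLI sketch of the same configuration follows; the node ID, IP addresses, and alias are hypothetical values, and the trailing number is the node Ethernet port being configured:

svctask cfgportip -node 1 -ip 9.64.210.80 -mask 255.255.255.0 -gw 9.64.210.1 1
svctask chnode -iscsialias ITSO_N1 1

The first command assigns an iSCSI IP address to Ethernet port 1 of node 1, and the second command (assuming the -iscsialias parameter of chnode, as used here) sets an iSCSI alias for that node.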


8.11 Managing Copy Services


See Chapter 6, Advanced Copy Services on page 253 for more information about the functionality of Copy Services in the SVC environment.

8.12 FlashCopy operations using the GUI


When you work with FlashCopy using the GUI, it is easy to control as long as you have a small number of mappings. When using many mappings, it is usually best to use the CLI to execute your commands.

8.13 Creating a FlashCopy consistency group


To create a FlashCopy consistency group in the SVC GUI, expand Manage Copy Services in the Task pane and select FlashCopy Consistency Groups (Figure 8-174).

Figure 8-174 Select FlashCopy Consistency Groups

Then, from the drop-down menu, select Create a Consistency Group and click Go; this can be seen in Figure 8-175.

Figure 8-175 Create consistency group

Enter the desired name, as shown in Figure 8-176.


Figure 8-176 Create consistency group

Note: If we choose to use the Automatically Delete Consistency Group When Empty feature, we can only use this consistency group for mappings that are marked for auto deletion. A non-autodelete consistency group is allowed to contain both autodelete and non-autodelete FlashCopy mappings.
Click Close when the new name has been entered; the result should be as shown in Figure 8-177.

Figure 8-177 View consistency group

Repeat the previous steps to create another FlashCopy consistency group (Figure 8-178 on page 568). The FlashCopy consistency groups are now ready to use.

Figure 8-178 View consistency group
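The CLI equivalent is a single command per group; a minimal sketch follows (the group name matches the example used later in this chapter):

svctask mkfcconsistgrp -name FC_SIGNA

This creates an empty FlashCopy consistency group named FC_SIGNA, to which mappings can then be added.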


8.13.1 Creating a FlashCopy mapping


Here we create the FlashCopy mappings for each of our VDisks and their respective targets. In the SVC GUI, we expand Manage Copy Services in the Task pane and select FlashCopy mappings. When prompted for filtering, we select Bypass Filter. This shows us all the FlashCopy mappings that were previously defined, if any. As shown in Figure 8-179, we select Create a Mapping from the drop-down menu and click Go to start the creation process for a FlashCopy mapping.

Figure 8-179 Create FlashCopy mapping

We are then presented with the FlashCopy creation wizard overview of the creation process for a FlashCopy mapping, and we click Next to proceed. We name the first FlashCopy mapping PROD_1, select the previously created consistency group FC_SIGNA, set the background copy priority to 50 and the Grain Size to 64, and click Next to proceed, as shown in Figure 8-180.

Figure 8-180 Define FlashCopy mapping properties

The next step is to select the source VDisk. If there are many source VDisks that are not already defined in a FlashCopy mapping, we can filter the list here. In Figure 8-181, we define the filter * (which shows all of our VDisks) for the source VDisk and click Next to proceed.


Figure 8-181 Filter source VDisk candidates

We select Galtarey_01 from the available VDisks as our source disk and click Next to proceed. The next step is to select our target VDisk. The FlashCopy mapping wizard only presents a list of VDisks that are the same size as the source VDisk and that are not already in a FlashCopy mapping or defined in a Metro Mirror relationship. In Figure 8-182, we select the target Hrappsey_01 and click Next to proceed.

Figure 8-182 Select target VDisk.

In the next step, we select an I/O group for this mapping. Finally, we verify our FlashCopy mapping (Figure 8-183 on page 571) and click Finish to create it.


Figure 8-183 FlashCopy mapping verification

We check the result of this as shown in Figure 8-184.

Figure 8-184 View FlashCopy mapping

We repeat the procedure to create another FlashCopy mapping to the second FlashCopy target VDisk of Galtarey_01. We give it a different FlashCopy mapping name and choose a different FlashCopy consistency group, as shown in Figure 8-185 on page 572. As you can see in this example, we changed the background copy rate to 30, which slows down the background copy process. The clearing rate of 60 will extend the stopping process if we have to stop the mapping during a copy process. An incremental mapping copies only the parts of the source or target VDisk that have changed since the last FlashCopy process.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all data from the source to the target VDisk.


Figure 8-185 Create FlashCopy mapping type of incremental

Note: If no consistency group is defined, the mapping will be a standalone mapping and can be prepared and started without impacting other mappings. All mappings in the same consistency group need to have the same status to maintain the consistency of the group. In Figure 8-186, you can see that Galtarey_01 is still available.

Figure 8-186 Viewing FlashCopy mapping

We select Hrappsey_02 as the destination VDisk, as shown in Figure 8-187 on page 573.


Figure 8-187 Select a second target VDisk

On the final page of the wizard, as shown in Figure 8-188, we select Finish after verifying all the parameters.

Figure 8-188 Verification of FlashCopy mapping

We confirm the parameter settings by clicking Finish, as shown in Figure 8-188. The background copy rate specifies the priority that should be given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default value is 50.
Tip: FlashCopy can be invoked from the SVC GUI, but this might not make much sense if you plan to handle a large number of FlashCopy mappings or consistency groups periodically, or at varying times. In this case, creating a script by using the CLI may be more convenient.
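A script fragment for that kind of bulk work might look like the following minimal sketch; the VDisk and group names match the example above, but the remaining values are illustrative only:

svctask mkfcmap -name PROD_1 -source Galtarey_01 -target Hrappsey_01 -consistgrp FC_SIGNA -copyrate 50 -grainsize 64

This creates the FlashCopy mapping PROD_1 from Galtarey_01 to Hrappsey_01 in the consistency group FC_SIGNA, with a background copy rate of 50 and a grain size of 64 KB.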

8.13.2 Preparing (pre-triggering) the FlashCopy


When performing the FlashCopy on the VDisks with the database, we want to be able to control the PiT when the FlashCopy is triggered, in order to keep our quiesce time at a minimum and preserve data integrity. We put the VDisks in a consistency group, and then we prepare the consistency group in order to flush the cache for all source VDisks.


If you select only one mapping to be prepared, the cluster asks if you want all the volumes in that consistency group to be prepared, as shown in Figure 8-189.

Figure 8-189 FlashCopy messages

When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group, to prepare all the mappings at once. We select the FlashCopy consistency group and Prepare a consistency group from the action list and click Go. The status will go to Preparing, and then finally to Prepared. Press the Refresh button several times until it is in the Prepared state. Figure 8-190 shows how we check the result. The status of the consistency group has changed to Prepared.

Figure 8-190 View prepared state of consistency groups

8.13.3 Starting (triggering) FlashCopy mappings


When the FlashCopy mapping enters the Prepared state, we can start the copy process. Only mappings that are not a member of a consistency group, or that are the only mapping in a consistency group, can be started individually. As shown in Figure 8-191, we select the FlashCopy mapping we want to start, select Start a Mapping from the drop-down menu, and click Go to proceed.

Figure 8-191 Start a FlashCopy mapping

Because we have already prepared the FlashCopy mapping, we are ready to start the mapping right away. Notice that this mapping is not a member of any consistency group. An overview message with information about the mapping we are about to start is shown in Figure 8-192, and we select Start to start the FlashCopy mapping.

Figure 8-192 Starting FlashCopy mapping

After we have selected Start we are automatically shown the copy process view that shows the progress of our copy mappings.

8.13.4 Starting (triggering) a FlashCopy consistency group


As shown in Figure 8-193, the FlashCopy consistency group enters the prepared state. All mappings in this group will be brought to the same state. To start the FlashCopy consistency group, we select the consistency group and select Start a Consistency Group from the scroll menu and click Go.

Figure 8-193 Start the consistency group

In Figure 8-194, we are prompted to confirm starting the FlashCopy consistency group. We now flush the database and OS buffers and quiesce the database, then click OK to start the FlashCopy consistency group. Note: Since we have already prepared the FlashCopy consistency group, this option is grayed out when you are prompted to confirm starting the FlashCopy consistency group.

Figure 8-194 Start consistency group message


As shown in Figure 8-195 on page 576, we verified that the consistency group is in the Copying state, and subsequently, we resume the database I/O.

Figure 8-195 Consistency group status
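The prepare and start sequence can also be scripted from the CLI; a minimal sketch follows (CG_FC1 is a hypothetical consistency group name):

svctask prestartfcconsistgrp CG_FC1
svctask startfcconsistgrp CG_FC1

The first command flushes the cache for the source VDisks and moves the group to the Prepared state; the second command triggers the point-in-time copy for every mapping in the group.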

8.13.5 Monitoring the FlashCopy progress


To monitor the copy progress, you can click Refresh, or you can select Manage Progress and then FlashCopy and monitor the progress there. This is shown in Figure 8-196 on page 576.

Figure 8-196 FlashCopy background copy progress

When the background copy is completed for all FlashCopy mappings in the consistency group, the status is changed to Idle or Copied.

8.13.6 Stopping the FlashCopy consistency group


When a FlashCopy consistency group is stopped, the target VDisks become invalid and are set offline by the SVC. The FlashCopy mapping or consistency group needs to be prepared again or re-triggered to bring the target VDisks online again. Tip: If you want to stop a mapping or group in a multiple target FlashCopy environment, consider whether you want to keep any of the dependent mappings. If not, you should issue the stop command with the force parameter; this will stop all of the dependent maps too and negate the need for the stopping copy process to run.


Note: Stopping a FlashCopy mapping should only be done when the data on the target VDisk is of no use, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC. As shown in Figure 8-197, we stop the FC_DONA consistency group. All mappings belonging to that consistency group are now in the Copying state.

Figure 8-197 Stop FlashCopy consistency group

We select the FlashCopy consistency group FC_DONA and from the drop down menu we select Stop Mapping as shown in Figure 8-198.

Figure 8-198 Stopping the FlashCopy consistency group

When selecting what method to use to stop the mapping we have the three options as shown in Figure 8-199.

Figure 8-199 Stopping FlashCopy consistency group options


Since we want to stop the mapping right away, we select Forced Stop, and the status of the FlashCopy consistency group goes from Copying to Stopping and finally to Stopped, as shown in Figure 8-200.

Figure 8-200 FlashCopy consistency group status
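From the CLI, the forced stop corresponds to the following minimal sketch (FC_DONA matches the example above):

svctask stopfcconsistgrp -force FC_DONA

This stops every mapping in the consistency group immediately, including any dependent mappings, and the target VDisks become invalid as described above.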

8.13.7 Deleting the FlashCopy mapping


If we want to delete a FlashCopy mapping we have two options, either the automatic deletion of a mapping or manual deletion. When we initially create a mapping we have the option to insert a check mark as shown in Figure 8-201 where we enable the function that automatically deletes the FlashCopy mapping when the background copy process has finished.

Figure 8-201 Selecting the option to automatically delete the mapping

If the option was not selected initially, you can delete the mapping manually, as shown in Figure 8-202 on page 579.


Figure 8-202 Manual deletion of a FlashCopy mapping

8.13.8 Deleting the FlashCopy consistency group


If you delete a consistency group with active mappings in it, all mappings in that group become stand-alone mappings.
Tip: If you want to use the target VDisks in a consistency group as normal VDisks, you can monitor the background copy progress until it is complete (100% copied) and then delete the FlashCopy mapping.
To delete a consistency group, we start by selecting a group and then choose the delete action from the drop-down menu, as shown in Figure 8-203.

Figure 8-203 Deletion of a FlashCopy consistency group

We can do this even if the consistency group has a status of Copying, as shown in Figure 8-204.

Figure 8-204 Deletion of a consistency group with a mapping in the copying state

Because there is an active mapping in the Copying state, we get a warning message, as shown in Figure 8-205 on page 580.


Figure 8-205 Warning messages

8.13.9 Migration between a fully allocated VDisk and Space-efficient VDisk


If you want to migrate from a fully allocated VDisk to a Space-efficient VDisk, follow the same procedure as in 8.13.1, Creating a FlashCopy mapping on page 569, but make sure to select a Space-efficient VDisk that has already been created as your target volume. The same method can be used to migrate from a Space-efficient VDisk to a fully allocated VDisk: create a FlashCopy mapping with the Space-efficient VDisk as the source and the fully allocated VDisk as the target. Details on how to create a Space-efficient VDisk are in 8.5.4, Creating a space-efficient VDisk with auto-expand on page 509.
Important: The copy process overwrites all the data on the target VDisk. You must back up all the data that you need before you start the copy process.

8.13.10 Reversing and splitting a FlashCopy mappings


Starting with SVC 5.1 we can now perform a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning. In other words we can start a FlashCopy mapping whose target is the source of another FlashCopy mapping. This enables us to reverse the direction of a FlashCopy map, without having to remove existing maps, and without losing the data from the target. When you prepare either a standalone mapping or consistency group you will be prompted with a message as shown in Figure 8-206.

Figure 8-206 FlashCopy restore option

Splitting a cascaded FlashCopy mapping allows the source or target of a map that is 100% complete to be removed from the head of the cascade when the map is stopped.

For example, if we have four VDisks in a cascade (A -> B -> C -> D), and the map A-> B is 100% complete, as shown in Figure 8-207 on page 581, then clicking Split Stop as shown in Figure 8-208 will result in FCMAP_AB becoming idle_copied and the remaining cascade will become B->C->D.

Figure 8-207 Stopping a FlashCopy mapping

Figure 8-208 Selecting the Split Stop option

Without the split option, VDisk A would remain at the head of the cascade (A -> C -> D). Consider this sequence of steps:
1. The user takes a backup using the mapping A -> B. A is the production VDisk; B is a backup.
2. At some later point, the user suffers corruption on A and so reverses the mapping B -> A.
3. The user then takes another backup from the production disk A, and so has the cascade B -> A -> C.
Stopping A -> B without using the Split Stop option results in the cascade B -> C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, they can still start the mapping A -> B (using the -restore flag), but they cannot then reverse the mapping to A (B -> A or C -> A).
Stopping A -> B with Split Stop would have resulted in the cascade A -> C. This does not cause the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.


8.14 Metro Mirror operations


Next, we show how to set up Metro Mirror using the GUI. Note: This example is for intercluster only, so if you want to set up intracluster, we highlight those parts of the following procedure that you do not need to perform.

8.14.1 Cluster partnership


Starting with SVC 5.1, we are no longer limited to a one-to-one cluster partnership. A cluster can now be in partnerships with multiple SVC clusters, which allows four different configurations using a maximum of four connected clusters: Star configuration, as shown in Figure 8-209.

Figure 8-209 Star Configuration

Triangle Configuration as shown in Figure 8-210.

Figure 8-210 Triangle Configuration


Fully-Connected Configuration as shown in Figure 8-211 on page 583.

Figure 8-211 Fully-Connected configuration

Daisy-Chaining Configuration as shown in Figure 8-212.

Figure 8-212 Daisy-chaining configuration

Important: All SVC clusters in such a partnership must be at level 5.1 or later. In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS2 at the secondary site. Details of the VDisks are shown in Table 8-3.
Table 8-3 VDisk details
Content of VDisk      VDisks at primary site   VDisks at secondary site
Database files        MM_DB_Pri                MM_DB_Sec
Database log files    MM_DBLog_Pri             MM_DBLog_Sec
Application files     MM_App_Pri               MM_App_Sec

Because data consistency is needed across VDisks MM_DB_Pri and MM_DBLog_Pri, a consistency group, CG_W2K3_MM, is created to handle their Metro Mirror relationships. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for VDisk MM_App_Pri. The Metro Mirror setup is illustrated in Figure 8-213 on page 584.


Figure 8-213 Metro Mirror scenario (primary site SVC cluster ITSO-CLS1 to secondary site SVC cluster ITSO-CLS2: consistency group CG_W2K3_MM contains MM Relationship 1, MM_DB_Pri to MM_DB_Sec, and MM Relationship 2, MM_DBlog_Pri to MM_DBlog_Sec; the stand-alone MM Relationship 3 maps MM_App_Pri to MM_App_Sec)

8.14.2 Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate. To set up Metro Mirror, the following steps must be performed:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters.
2. Create a Metro Mirror consistency group:
   Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
   Master: MM_DB_Pri
   Auxiliary: MM_DB_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL1
   Consistency group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   Master: MM_DBLog_Pri
   Auxiliary: MM_DBLog_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL2
   Consistency group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
   Master: MM_App_Pri
   Auxiliary: MM_App_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: MMREL3

8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2


We perform this operation on both clusters. Note: If you are creating an intracluster Metro Mirror, do not perform the next step, creating the cluster partnership; instead, go to Creating a Metro Mirror consistency group on page 587. To create a Metro Mirror partnership between the SVC clusters using the GUI, we launch the SVC GUI for ITSO-CLS1. Then we select Manage Copy Services from the Welcome screen and click Metro & Global Mirror Cluster Partnership; the window opens as shown in Figure 8-214.

Figure 8-214 Create a Cluster Partnership

After we click Go to create a cluster partnership, as shown in Figure 8-214, the SVC cluster shows the available partner cluster candidates, as shown in Figure 8-215 on page 586. We have multiple clusters to choose from in the cluster candidates list; in our scenario, we use the one called ITSO-CLS2. Select ITSO-CLS2, specify the available bandwidth for the background copy, in this case 50 MBps, and then click OK. There are two additional options available during creation:
- Inter-Cluster Delay Simulation, which simulates the Global Mirror round-trip delay between the two clusters, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.
- Intra-Cluster Delay Simulation, which simulates the Global Mirror round-trip delay within the cluster, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.


Figure 8-215 Showing available cluster candidates

As shown in Figure 8-216, we can see that our partnership is Partially Configured, because we have only performed the work on one side of the partnership.

Figure 8-216 Viewing cluster partnerships

Our newly created Metro Mirror cluster partnership is shown as Partially Configured. To fully configure the Metro Mirror cluster partnership, we must carry out the same steps on ITSO-CLS2 as we did on ITSO-CLS1. For simplicity and brevity, only the two most significant windows are shown for the fully configured partnership. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Metro Mirror cluster partnership, specify the available bandwidth for the background copy, again 50 MBps, and then click OK, as shown in Figure 8-217 on page 587.


Figure 8-217 Selecting the cluster partner on the secondary cluster

Now that both sides of the SVC Cluster Partnership are defined, the resulting window shown in Figure 8-218 on page 587 confirms that our Metro Mirror cluster partnership is Fully Configured.

Figure 8-218 Fully configured Cluster partnership

The GUI for ITSO-CLS2 is no longer needed. Close this and use the GUI for cluster ITSO-CLS1 for all further steps.
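The same partnership can be created from the CLI. A minimal sketch, run once on each cluster, follows; the bandwidth value matches the 50 MBps used in the GUI example, and the exact output of svcinfo lscluster varies by code level.
   On ITSO-CLS1:  svctask mkpartnership -bandwidth 50 ITSO-CLS2
   On ITSO-CLS2:  svctask mkpartnership -bandwidth 50 ITSO-CLS1
After both commands have been run, svcinfo lscluster should show the partnership as fully configured on both clusters.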

8.14.4 Creating a Metro Mirror consistency group


To create the consistency group to be used for the Metro Mirror relationships of the VDisks with database and database log files, we select Manage Copy Services from the Welcome screen and click Metro Mirror Consistency Groups. To start the creation process, select Create Consistency Group from the drop-down menu and click Go, as shown in Figure 8-219.


Figure 8-219 Create a consistency group

We are presented with the wizard that helps create the Metro Mirror consistency group. The first step in the wizard gives an introduction to the steps involved in the creation of a Metro Mirror consistency group, as shown in Figure 8-220. Click Next to proceed.

Figure 8-220 Introduction to Metro Mirror consistency group creation wizard

As shown in Figure 8-221, specify the name for the consistency group and select your remote cluster, which was already defined in 8.14.3, Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 on page 585. If we were using this consistency group for internal mirroring, that is, within the same cluster, we would select an intra-cluster consistency group. In our scenario, we select intercluster with our remote cluster ITSO-CLS2 and click Next.

Figure 8-221 Specifying consistency group name and type

Figure 8-222 on page 589 would show any Metro Mirror relationships already created that could be included in our Metro Mirror consistency group. Because we do not have any existing relationships to include at this point, we create an empty group by clicking Next to proceed.

Figure 8-222 Empty list

Verify the settings for the consistency group and click Finish to create the Metro Mirror consistency group, as shown in Figure 8-223 on page 589.

Figure 8-223 Verify settings for Metro Mirror consistency group

After creation of the consistency group, the GUI returns to the Viewing Metro & Global Mirror Consistency Groups window, as shown in Figure 8-224. This window lists the newly created consistency group; notice that it is empty, because no relationships have been added to it yet.

Figure 8-224 Viewing the newly created consistency group
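The CLI equivalent of this wizard is a single command. The following sketch uses the scenario names; check it against the svctask mkrcconsistgrp help on your cluster.
   svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K3_MM
The -cluster parameter names the remote (auxiliary) cluster; omitting it creates an intracluster consistency group.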


8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri


To create the Metro Mirror relationships for VDisks MM_DB_Pri and MM_DBLog_Pri, select Manage Copy Services and click Metro Mirror Cluster Relationships from the SVC Welcome screen. To start the creation process, select Create a Relationship from the drop-down menu and click Go, as shown in Figure 8-225.

Figure 8-225 Create a relationship

We are presented with the wizard that will help us create the Metro Mirror relationship. The first step in the wizard gives an introduction to the steps involved in the creation of the Metro Mirror relationship, as shown in Figure 8-226. Click Next to proceed.

Figure 8-226 Introduction to the Metro Mirror relationship creation wizard

As shown in Figure 8-227 on page 591, we name the first Metro Mirror relationship MMREL1 and select the type of cluster relationship (in this case, intercluster), as per the scenario shown in Figure 8-213 on page 584. This window also gives us the option to select the type of copy service, which in our case is Metro Mirror.


Figure 8-227 Naming the Metro Mirror relationship and selecting the type of cluster relationship

The next step enables us to select a master VDisk. As this list could potentially be large, the Filtering Master VDisks Candidates window appears, which enables us to reduce the list of eligible VDisks based on a defined filter. In Figure 8-228, we specify a filter and click Next. Tip: In our scenario, we use MM* as the filter to avoid listing all the VDisks; you can use * to list all VDisks.

Figure 8-228 Define filter for VDisk candidates

As shown in Figure 8-229, we select MM_DB_Pri to be a master VDisk for this relationship, and click Next to proceed.


Figure 8-229 Selecting the master VDisk

The next step requires us to select an auxiliary VDisk. The Metro Mirror relationship wizard will automatically filter this list, so that only eligible VDisks are shown. Eligible VDisks are those that have the same size as the master VDisk and are not already part of a Metro Mirror relationship. As shown in Figure 8-230, we select MM_DB_Sec as the auxiliary VDisk for this relationship, and click Next to proceed.

Figure 8-230 Selecting the auxiliary VDisk

As shown in Figure 8-231 on page 592, we select the consistency group that we created earlier, so that our relationship is immediately added to that group. Click Next to proceed.

Figure 8-231 Selecting relationship to be a part of consistency group


Finally, in Figure 8-232, we verify the attributes for our Metro Mirror relationship and click Finish to create it.

Figure 8-232 Verifying the Metro Mirror relationship

After successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, as shown in Figure 8-233. This window lists the newly created relationship; notice that we have not started the copy process, we have only established the relationship between the two VDisks.

Figure 8-233 Viewing the Metro Mirror relationship

By following a similar process, we create the second Metro Mirror relationship MMREL2, which is shown in Figure 8-234.

Figure 8-234 Viewing the second Metro Mirror relationship MMREL2
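For reference, the two relationships can also be created with the CLI, as sketched below using the scenario names; verify the options with the svctask mkrcrelationship help.
   svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL1
   svctask mkrcrelationship -master MM_DBLog_Pri -aux MM_DBLog_Sec -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL2
Without the -global option, mkrcrelationship creates a Metro Mirror (synchronous) relationship.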


8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri


To create the stand-alone Metro Mirror relationship, we start the creation process by selecting Create a Relationship from the drop-down menu and clicking Go. Next, we are presented with the wizard that shows the steps involved in creating a relationship, and we click Next to proceed. As shown in Figure 8-235, we name the relationship (MMREL3), specify that it is an intercluster relationship with ITSO-CLS2, and click Next.

Figure 8-235 Specifying the Metro Mirror relationship name and auxiliary cluster

As shown in Figure 8-236 on page 594, we are prompted for a filter prior to presenting the master VDisk candidates. We select the MM* filter and click Next.

Figure 8-236 Filtering VDisk candidates

As shown in Figure 8-237 on page 595, we select MM_App_Pri to be the master VDisk of the relationship, and click Next to proceed.


Figure 8-237 Selecting the master VDisk

As shown in Figure 8-238, we select MM_APP_Sec as the auxiliary VDisk of the relationship, and click Next to proceed.

Figure 8-238 Selecting the auxiliary VDisk

As shown in Figure 8-239, we will not select a consistency group, since we are creating a stand-alone Metro Mirror relationship.

Figure 8-239 Selecting options for the Metro Mirror relationship

Note: To add a Metro Mirror relationship to a consistency group, it must be in the same state as the consistency group.


As shown in Figure 8-240, we cannot select a consistency group, because we selected our relationship as synchronized and it is therefore not in the same state as the consistency group that we created earlier.

Figure 8-240 The consistency group must have the same state as the relationship

Finally, Figure 8-241 shows the actions that will be performed. We click Finish to create this new relationship.

Figure 8-241 Verifying the Metro Mirror relationship

After successful creation, we are returned to the Metro Mirror relationship window. Figure 8-242 now shows all our defined Metro Mirror relationships.

Figure 8-242 Viewing Metro Mirror relationships
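A sketch of the CLI equivalent for the stand-alone relationship follows; the -sync option corresponds to the synchronized option selected in the wizard (confirm against your code level).
   svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -cluster ITSO-CLS2 -sync -name MMREL3
Because no consistency group is specified, the relationship is created stand-alone; it can be added to a consistency group later by modifying the relationship, provided the states match.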


8.14.7 Starting Metro Mirror


Now that we have created the Metro Mirror consistency group and relationships, we are ready to use Metro Mirror relationships in our environment. When performing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy if a failure occurs that affects the SAN at the production site. In the following section, we show how to stop and start a stand-alone Metro Mirror relationship and consistency group.

8.14.8 Starting a stand-alone Metro Mirror relationship


In Figure 8-243, we select the stand-alone Metro Mirror relationship MMREL3, and from the drop-down menu, we select Start Copy Process and click Go.

Figure 8-243 Starting a stand-alone Metro Mirror relationship

In Figure 8-244, we do not need to change Forced start, Mark as clean, or Copy direction parameters, as this is the first time we are invoking this Metro Mirror relationship (and we defined the relationship as being already synchronized). We click OK to start the stand-alone Metro Mirror relationship MMREL3.

Figure 8-244 Selecting options and starting the copy process

Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state, as shown in Figure 8-245 on page 598.


Figure 8-245 Viewing Metro Mirror relationships
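From the CLI, the same action is a single command, sketched here with the scenario name:
   svctask startrcrelationship MMREL3
Because the relationship was defined as already synchronized, no force, clean, or primary options are needed for this first start.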

8.14.9 Starting a Metro Mirror consistency group


To start the Metro Mirror consistency group CG_W2K3_MM, we select Manage Copy Services and Metro Mirror Consistency Groups, from our SVC Welcome screen. In Figure 8-246, we select the Metro Mirror consistency group CG_W2K3_MM, and from the drop-down menu, we select Start Copy Process and click Go.

Figure 8-246 Starting copy process for the consistency group

As shown in Figure 8-247, we click OK to start the copy process. We cannot select the Forced start, Mark as clean, or Copy Direction options, as our consistency group is currently in the Inconsistent stopped state.

Figure 8-247 Selecting options and starting the copy process

As shown in Figure 8-248, we are returned to the Metro Mirror consistency group list and the consistency group CG_W2K3_MM has changed to the Inconsistent copying state.


Figure 8-248 Viewing Metro Mirror consistency groups

Since the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all the relationships in the consistency group. Upon completion of the background copy for all the relationships in the consistency group, it enters the Consistent synchronized state.
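The CLI equivalent, as a sketch:
   svctask startrcconsistgrp CG_W2K3_MM
You can watch the state change from inconsistent copying to consistent synchronized with svcinfo lsrcconsistgrp CG_W2K3_MM.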

8.14.10 Monitoring background copy progress


The status of the background copy progress can be seen in the last column of the Viewing Metro Mirror Relationships window, or under My Work by expanding Manage Progress and clicking View Progress. This allows you to view the Metro Mirror progress, as shown in Figure 8-249.

Figure 8-249 Viewing background copy progress for Metro Mirror relationships

Note: Setting up SNMP traps for the SVC enables automatic notification when the Metro Mirror consistency group or relationships change state.

8.14.11 Stopping and restarting Metro Mirror


Now that the Metro Mirror consistency group and relationships are running, in this and the following sections, we describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationship, as well as the consistency group.


8.14.12 Stopping a stand-alone Metro Mirror relationship


To stop a Metro Mirror relationship, while enabling access (write I/O) to both the primary and secondary VDisk, we select the relationship and select Stop Copy Process from the drop-down menu and click Go, as shown in Figure 8-250.

Figure 8-250 Stopping a stand-alone Metro Mirror relationship

As shown in Figure 8-251, we check the Enable write access... option and click OK to stop the Metro Mirror relationship.

Figure 8-251 Enable access to the secondary VDisk while stopping relationship

As shown in Figure 8-252, the Metro Mirror relationship transits to the Idling state when stopped while enabling access to the secondary VDisk.

Figure 8-252 Viewing the Metro Mirror relationships
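The equivalent CLI sketch for stopping the relationship with write access enabled is:
   svctask stoprcrelationship -access MMREL3
The -access parameter corresponds to the Enable write access... check box; without it, the relationship stops in the Consistent stopped state instead of Idling.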

8.14.13 Stopping a Metro Mirror consistency group


As shown in Figure 8-253, we select the Metro Mirror consistency group and Stop Copy Process from the drop-down menu and click Go.


Figure 8-253 Selecting the Metro Mirror consistency group to be stopped

As shown in Figure 8-254, we click OK without specifying Enable write access... to the secondary VDisks.

Figure 8-254 Stopping consistency group without enabling access to secondary VDisks

As shown in Figure 8-255, the consistency group enters the Consistent stopped state, when stopped without enabling access to the secondary.

Figure 8-255 Viewing Metro Mirror consistency groups

Afterwards, if we want to enable write access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process and this time specify that we want to enable write access to the secondary VDisks. In Figure 8-256, we select the Metro Mirror consistency group, select Stop Copy Process from the drop-down menu, and click Go.


Figure 8-256 Stopping the Metro Mirror consistency group

As shown in Figure 8-257, we check the Enable write access... check box and click OK.

Figure 8-257 Enabling access to secondary VDisks

When applying the Enable write access... option, the consistency group transits to the Idling state, as shown in Figure 8-258.

Figure 8-258 Viewing Metro Mirror consistency group in the Idling state

8.14.14 Restarting a Metro Mirror relationship in the Idling state


When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or auxiliary VDisks in the Metro Mirror relationship, then consistency will have been compromised. In this situation, we must check the Force option to start the copy process, otherwise the command will fail. As shown in Figure 8-259, we select the Metro Mirror relationship and Start Copy Process from the drop-down menu and click Go.


Figure 8-259 Starting a stand-alone Metro Mirror relationship in the Idling state

As shown in Figure 8-260, we check the Force option, since write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary, and click OK.

Figure 8-260 Specifying options while starting copy process

The Metro Mirror relationship enters the Consistent copying state, and when the background copy is complete, the relationship transits to the Consistent synchronized state, as shown in Figure 8-261.

Figure 8-261 Viewing Metro Mirror relationship
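The corresponding CLI sketch, forcing the restart and making the master VDisk the primary, is:
   svctask startrcrelationship -primary master -force MMREL3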

8.14.15 Restarting a Metro Mirror consistency group in the Idling state


When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, then consistency will have been compromised. In this situation, we must check the Force option to start the copy process, otherwise the command will fail. As shown in Figure 8-262, we select the Metro Mirror consistency group and Start Copy Process from the drop-down menu and click Go.

Figure 8-262 Starting the copy process for the consistency group

As shown in Figure 8-263, we check the Force option and set the copy direction by selecting the master as the primary.

Figure 8-263 Specifying the options while starting the copy process in the consistency group

When the background copy completes, the Metro Mirror consistency group enters the Consistent synchronized state shown in Figure 8-264.

Figure 8-264 Viewing Metro Mirror consistency groups

8.14.16 Changing copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationships and the consistency group.


8.14.17 Switching copy direction for a Metro Mirror consistency group


When a Metro Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the Metro Mirror consistency group. In Figure 8-265, we select the consistency group CG_W2K3_MM and Switch Copy Direction from the drop-down menu and click Go. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that will change from primary to secondary, since all I/O will be inhibited when the VDisks become secondary. Therefore, careful planning is required prior to switching the copy direction.

Figure 8-265 Selecting the consistency group for which the copy direction is to be changed

In Figure 8-266, we see that the current primary VDisks are the master VDisks. So, to change the copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.

Figure 8-266 Selecting primary VDisk, as auxiliary, to switch the copy direction

The copy direction is now switched and we are returned to the Metro Mirror consistency group list, where we see that the copy direction has been switched, as shown in Figure 8-267.


Figure 8-267 Viewing Metro Mirror consistency group after changing the copy direction

In Figure 8-268, we show the new copy direction for individual relationships within that consistency group.

Figure 8-268 Viewing Metro Mirror relationship after changing the copy direction

8.14.18 Switching the copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship. In Figure 8-269 on page 606, we select the relationship MMREL3 and Switch Copy Direction from the drop-down menu and click Go. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transits from primary to secondary, since all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Metro Mirror relationship.

Figure 8-269 Selecting relationship whose copy direction needs to be changed


In Figure 8-270, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Metro Mirror relationship, we specify the auxiliary VDisk to be the primary, and click OK.

Figure 8-270 Selecting primary VDisk, as auxiliary, to switch copy direction

The copy direction is now switched and we are returned to the Metro Mirror relationship list, where we see that the copy direction has been switched and the auxiliary VDisk has become the primary, as shown in Figure 8-271.

Figure 8-271 Viewing Metro Mirror relationships
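Both switch operations have simple CLI equivalents. The following sketch makes the auxiliary VDisks the primary, matching the GUI examples above; verify the syntax on your cluster.
   svctask switchrcconsistgrp -primary aux CG_W2K3_MM
   svctask switchrcrelationship -primary aux MMREL3
The -primary parameter accepts master or aux and can only be used while the relationship or consistency group is in the Consistent synchronized state.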


8.15 Global Mirror operations


Next, we show how to set up Global Mirror. Note: This example is for intercluster only. If you want to set up intracluster, we highlight those parts of the following procedure that you do not need to perform. Starting with SVC 5.1, we can connect multiple clusters in partnerships, as shown in Cluster partnership on page 582. In the following scenario, however, we set up an intercluster Global Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS2 at the secondary site. Details of the VDisks are shown in Table 8-4.
Table 8-4 Details of VDisks for Global Mirror relationship
Content of VDisk      VDisks at primary site   VDisks at secondary site
Database files        GM_DB_Pri                GM_DB_Sec
Database log files    GM_DBLog_Pri             GM_DBLog_Sec
Application files     GM_App_Pri               GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. The Global Mirror setup is illustrated in Figure 8-272 on page 608.

Figure 8-272 Global Mirror scenario using the GUI (primary site SVC cluster ITSO-CLS1 to secondary site SVC cluster ITSO-CLS2: a consistency group contains GM_Relationship 1, GM_DB_Pri to GM_DB_Sec, and GM_Relationship 2, GM_DBLog_Pri to GM_DBLog_Sec; the stand-alone GM_Relationship 3 maps GM_App_Pri to GM_App_Sec)


8.15.1 Setting up Global Mirror


In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate. To set up Global Mirror, you must perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters:
   Bandwidth: 10 MBps
2. Create a Global Mirror consistency group:
   Name: CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   Master: GM_DB_Pri
   Auxiliary: GM_DB_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: GMREL1
   Consistency group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   Master: GM_DBLog_Pri
   Auxiliary: GM_DBLog_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: GMREL2
   Consistency group: CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
   Master: GM_App_Pri
   Auxiliary: GM_App_Sec
   Auxiliary SVC cluster: ITSO-CLS2
   Name: GMREL3

8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2


In this section, we create the SVC partnership on both clusters. Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to Creating a Global Mirror consistency group on page 614. To create a Global Mirror partnership between the SVC clusters using the GUI, we launch the SVC GUI for ITSO-CLS1. Then we select Manage Copy Services and click Metro & Global Mirror Cluster Partnership, as shown in Figure 8-273.

Figure 8-273 Selecting Global Mirror Cluster Partnership on ITSO-CLS1


Figure 8-274 shows the cluster partnerships defined for this cluster. Notice that the Metro Mirror partnership with ITSO-CLS2 that we created earlier is already listed. The window also gives a warning stating that, for any type of copy relationship between VDisks on two different clusters, a partnership must exist between those clusters. Click Go to continue creating the new partnership.

Figure 8-274 Creating a new partnership

In Figure 8-275 the available SVC cluster candidates are listed, which in our case is ITSO-CLS4. We select ITSO-CLS4 and specify the available bandwidth for the background copy; in this case, we select 10 MBps and then click OK.

Figure 8-275 Selecting SVC cluster partner and specifying bandwidth for background copy

In the resulting window, shown in Figure 8-276, the newly created Global Mirror cluster partnership is shown as Partially Configured.


Figure 8-276 Viewing the newly created Global Mirror partnership

To fully configure the Global Mirror cluster partnership, we must carry out the same steps on ITSO-CLS4 as we did on ITSO-CLS1. For simplicity, in the following figures, only the last two windows are shown. Launching the SVC GUI for ITSO-CLS4, we select ITSO-CLS1 for the Global Mirror cluster partnership, specify the available bandwidth for the background copy, again 10 MBps, and then click OK, as shown in Figure 8-277.

Figure 8-277 Selecting SVC cluster partner and specifying bandwidth for background copy

Now that both sides of the SVC Cluster Partnership are defined, the window shown in Figure 8-278 confirms that our Global Mirror cluster partnership is Fully Configured.


Figure 8-278 Global Mirror cluster partnership is fully configured

Note: Link tolerance, intercluster delay simulation, and intracluster delay simulation are introduced with the use of the Global Mirror feature.

8.15.3 Global Mirror link tolerance and delay simulations


In this section, we provide an overview of link tolerance and delay simulations.

Global Mirror link tolerance


The gm_link_tolerance parameter defines how sensitive the SVC is to overload conditions on the intercluster link. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships in order to prevent impacting host I/O at the primary site. To change the value, refer to Changing link tolerance and delay simulation values for Global Mirror on page 613. The link tolerance values are between 60 and 86400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds. Recommendation: We strongly recommend using the default value. If the link is overloaded for a period that would impact host I/O at the primary site, the relationships will be stopped to protect those hosts.

Global Mirror intercluster and intracluster delay simulation


This Global Mirror feature permits the simulation of a delayed write to a remote VDisk. It allows testing that detects colliding writes, so it can be used to test an application before the full deployment of the Global Mirror feature. The delay simulation can be enabled separately for either intracluster or intercluster Global Mirror. To enable it and change the value, refer to Changing link tolerance and delay simulation values for Global Mirror on page 613. The inter_cluster_delay_simulation and intra_cluster_delay_simulation parameters express the amount of time that secondary I/Os are delayed for intercluster and intracluster relationships, respectively. These values specify the number of milliseconds that I/O activity, that is, copying the primary VDisk to a secondary VDisk, is delayed. A value from 0 to 100 milliseconds, in 1 millisecond increments, can be set; a value of zero disables this feature. To check the current settings for the delay simulation, refer to Changing link tolerance and delay simulation values for Global Mirror on page 613.

Changing link tolerance and delay simulation values for Global Mirror
Here, we show how to modify the Global Mirror link tolerance and delay simulation values, and we show the changed values for these parameters. Launching the SVC GUI for ITSO-CLS1, we select the Global Mirror Cluster Partnership option to view and modify the parameters, as shown in Figure 8-279 and Figure 8-280 on page 613, respectively.

Figure 8-279 View and modify Global Mirror link tolerance and delay simulation parameters

Figure 8-280 Set Global Mirror link tolerance and delay simulations parameters

After performing the steps, the GUI returns to the Global Mirror Partnership window and lists the new parameter settings, as shown in Figure 8-281.


Figure 8-281 View modified parameters
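These parameters can also be changed with the CLI. The following is a hedged sketch with illustrative values; check the svctask chcluster help for the exact parameter names at your code level.
   svctask chcluster -gmlinktolerance 300
   svctask chcluster -gminterdelaysimulation 20 -gmintradelaysimulation 0
   svcinfo lscluster ITSO-CLS1
The first command sets the link tolerance in seconds, the second sets the intercluster and intracluster delay simulations in milliseconds, and the lscluster detailed view displays the current settings.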

8.15.4 Creating a Global Mirror consistency group


To create the consistency group to be used by the Global Mirror relationships for the VDisks with the database and database log files, we select Manage Copy Services and click Global Mirror Consistency Groups, as shown in Figure 8-282.

Figure 8-282 Selecting Global Mirror consistency groups

To start the creation process, we select Create Consistency Group from the drop-down menu and click Go, as shown in Figure 8-283. In our list, we already have the Metro Mirror consistency group that was created between ITSO-CLS1 and ITSO-CLS2, but we are now creating a new Global Mirror consistency group.


Figure 8-283 Create a consistency group

We are presented with a wizard that helps us create the Global Mirror consistency group. The first step in this wizard gives an introduction to the steps involved in the creation of the Global Mirror consistency group, as shown in Figure 8-284. Click Next to proceed.

Figure 8-284 Introduction to Global Mirror consistency group creation wizard

As shown in Figure 8-285, we specify the consistency group name and whether it is to be used for intercluster or intracluster relationships. In our scenario, we select Create an inter-cluster consistency group, and then we need to select our cluster partner. In Figure 8-285, we can see that we can select between ITSO-CLS2 and ITSO-CLS4; because ITSO-CLS4 is our Global Mirror partner, we select it and click Next.

Figure 8-285 Specifying consistency group name and type

Figure 8-286 would show any existing Global Mirror relationships that could be included in the Global Mirror consistency group. As we do not have any existing Global Mirror relationships at this time, we will create an empty group by clicking Next to proceed, as shown in Figure 8-286.

Figure 8-286 Select the existing Global Mirror relationship

Verify the settings for the consistency group and click Finish to create the Global Mirror Consistency Group, as shown in Figure 8-287.

Figure 8-287 Verifying the settings for the Global Mirror consistency group

When the Global Mirror consistency group is created, we are returned to the Viewing Global Mirror Consistency Groups window. It shows our newly created Global Mirror consistency group, as shown in Figure 8-288.

Figure 8-288 Viewing Global Mirror consistency groups


8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri


To create the Global Mirror Relationships for GM_DB_Pri and GM_DBLog_Pri, we select Manage Copy Services and click Global Mirror Cluster Relationships, from the Welcome screen. To start the creation process, we select Create a Relationship from the drop-down menu and click Go, as shown in Figure 8-289.

Figure 8-289 Create a relationship

We are presented with a wizard that helps us create Global Mirror relationships. The first step in the wizard gives an introduction to the steps involved in the creation of the Global Mirror relationship, as shown in Figure 8-290. Click Next to proceed.

Figure 8-290 Introduction to Global Mirror relationship creation wizard

As shown in Figure 8-291, we name our first Global Mirror relationship GMREL1, click Global Mirror Relationship, and select the type of cluster relationship. In this case, it is an intercluster relationship towards ITSO-CLS4, as shown in Figure 8-272 on page 608.


Figure 8-291 Naming the Global Mirror relationship and selecting the type of the cluster relationship

The next step will enable us to select a master VDisk. As this list could potentially be large, the Filtering Master VDisks Candidates window appears, which will enable us to reduce the list of eligible VDisks based on a defined filter. In Figure 8-292, use the filter for GM* (use * to list all VDisks) and click Next.

Figure 8-292 Defining the filter for master VDisk candidates

As shown in Figure 8-293, we select GM_DB_Pri to be the master VDisk of the relationship, and click Next to proceed.

Figure 8-293 Selecting the master VDisk


The next step will require us to select an auxiliary VDisk. The Global Mirror relationship wizard will automatically filter this list so that only eligible VDisks are shown. Eligible VDisks are those that have the same size as the master VDisk and are not already part of a Global Mirror relationship. As shown in Figure 8-294, we select GM_DB_Sec as the auxiliary VDisk for this relationship, and click Next to proceed.

Figure 8-294 Selecting the auxiliary VDisk

As shown in Figure 8-295, select the relationship to be part of the consistency group that we have created and click Next to proceed.

Figure 8-295 Selecting the relationship to be part of a consistency group

Note: It is not mandatory to make the relationship part of a consistency group at this stage. It can also be done later, after the relationship has been created, by modifying the relationship. Finally, in Figure 8-296, we verify the Global Mirror relationship attributes and click Finish to create it.


Figure 8-296 Verifying the Global Mirror relationship

After successful creation of the relationship, the GUI returns to the Viewing Global Mirror Relationships window, as shown in Figure 8-297. This window will list the newly created relationship. Using the same process, the second Global Mirror relationship, GMREL2, is also created. Both relationships are shown in Figure 8-297.

Figure 8-297 Viewing Global Mirror relationships
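For reference, the CLI sketch for these Global Mirror relationships differs from the Metro Mirror case only by the -global option (names as in this scenario; verify the syntax on your cluster).
   svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -global -name GMREL1
   svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -global -name GMREL2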

8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri


To create the stand-alone Global Mirror relationship, we start the creation process by selecting Create a Relationship from the drop-down menu and click Go, as shown in Figure 8-298.

Figure 8-298 Create a Global Mirror relationship

Next, we are presented with the wizard that shows the steps involved in the process of creating a Global Mirror relationship, as shown in Figure 8-299. Click Next to proceed.


Figure 8-299 Introduction to Global Mirror relationship creation wizard

In Figure 8-300, we name the Global Mirror relationship GMREL3, specify that it is an intercluster relationship, and click Next.

Figure 8-300 Naming the Global Mirror relationship and selecting the type of cluster relationship

As shown in Figure 8-301 on page 621, we are prompted for a filter prior to presenting the master VDisk candidates. We use * to list all candidates and click Next.

Figure 8-301 Filtering master VDisk candidates


As shown in Figure 8-302, we select GM_App_Pri to be the master VDisk for the relationship, and click Next to proceed.

Figure 8-302 Selecting the master VDisk

As shown in Figure 8-303, we select GM_App_Sec as the auxiliary VDisk for the relationship, and click Next to proceed.

Figure 8-303 Selecting auxiliary VDisk

As shown in Figure 8-304, we did not select a consistency group, as we are creating a stand-alone Global Mirror relationship.


Figure 8-304 Selecting options for the Global Mirror relationship

We also specify that the master and auxiliary VDisk are already synchronized; for the purpose of this example, we can assume that they are pristine. This is shown in Figure 8-305 on page 623.

Figure 8-305 Selecting the synchronized option for Global Mirror relationship

Note: To add a Global Mirror relationship to a consistency group, it must be in the same state as the consistency group. Even if we intend to make the Global Mirror relationship GMREL3 part of the consistency group CG_W2K3_GM, we are not offered the option, as shown in Figure 8-305. This is because the state of the relationship GMREL3 is Consistent Stopped, because we selected the synchronized option. The state of the consistency group CG_W2K3_GM is currently Inconsistent Stopped. Finally, Figure 8-306 shows the actions that will be performed. We click Finish to create this new relationship.


Figure 8-306 Verifying Global Mirror relationship

After successful creation, we are returned to the Viewing Global Mirror Relationship window. Figure 8-307 now shows all our defined Global Mirror relationships.

Figure 8-307 Viewing Global Mirror relationships

8.15.7 Starting Global Mirror


Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment. When performing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site. In this section, we show how to start the stand-alone Global Mirror relationship and the consistency group.

8.15.8 Starting a stand-alone Global Mirror relationship


In Figure 8-308, we select the stand-alone Global Mirror relationship GMREL3, and from the drop-down menu, we select Start Copy Process and click Go.


Figure 8-308 Starting the stand-alone Global Mirror relationship

In Figure 8-309, we do not need to change the parameters Forced start, Mark as clean, or Copy Direction, as this is the first time we are invoking this Global Mirror relationship (and we already defined the relationship as being synchronized in Figure 8-305 on page 623). We click OK to start the stand-alone Global Mirror relationship GMREL3.

Figure 8-309 Selecting options and starting the copy process

Since the Global Mirror relationship was in the Consistent Stopped state and no updates have been made on the primary VDisk, the relationship quickly enters the Consistent Synchronized state, as shown in Figure 8-310.

Figure 8-310 Viewing Global Mirror relationship

8.15.9 Starting a Global Mirror consistency group


To start the Global Mirror consistency group CG_W2K3_GM, select Global Mirror Consistency Groups from the SVC Welcome screen. In Figure 8-311, we select the Global Mirror consistency group CG_W2K3_GM, and from the drop-down menu, we select Start Copy Process and click Go.


Figure 8-311 Selecting Global Mirror consistency group and starting the copy process

As shown in Figure 8-312, we click OK to start the copy process. We cannot select the options Forced start, Mark as clean, or Copy Direction, as this is the first time we are invoking this Global Mirror consistency group.

Figure 8-312 Selecting options and starting the copy process

As shown in Figure 8-313, we are returned to the Viewing Global Mirror Consistency Groups window and the consistency group CG_W2K3_GM has changed to the Inconsistent copying state. Since the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all relationships in the consistency group. Upon completion of the background copy for all relationships in the consistency group, it enters the Consistent Synchronized state.

Figure 8-313 Viewing Global Mirror consistency groups

8.15.10 Monitoring background copy progress


The status of the background copy progress can be seen in the Viewing Global Mirror Relationships window, as shown in Figure 8-314, or alternatively, use the Manage Progress section under My Work and select Viewing Global Mirror Progress, as shown in Figure 8-315.


Figure 8-314 Monitoring background copy process for Global Mirror relationships

Figure 8-315 Monitoring background copy process for Global Mirror relationships

Note: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.

8.15.11 Stopping and restarting Global Mirror


Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationship, as well as the consistency group.

8.15.12 Stopping a stand-alone Global Mirror relationship


To stop a Global Mirror relationship while enabling access (write I/O) to the secondary VDisk, we select the relationship and Stop Copy Process from the drop-down menu and click Go, as shown in Figure 8-316.


Figure 8-316 Stopping a stand-alone Global Mirror relationship

As shown in Figure 8-317, we check the Enable write access... option and click OK to stop the Global Mirror relationship.

Figure 8-317 Enable access to the secondary VDisk while stopping the relationship

As shown in Figure 8-318, the Global Mirror relationship transits to the Idling state when stopped, while enabling write access to the secondary VDisk.

Figure 8-318 Viewing Global Mirror relationships

8.15.13 Stopping a Global Mirror consistency group


As shown in Figure 8-319, we select the Global Mirror consistency group and Stop Copy Process from the drop-down menu and click Go.

Figure 8-319 Selecting the Global Mirror consistency group to be stopped


As shown in Figure 8-320, we click OK without specifying the Enable write access... option to the secondary VDisk.

Figure 8-320 Stopping the consistency group without enabling access to the secondary VDisk

The consistency group enters the Consistent stopped state when stopped. Afterwards, if we want to enable access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process, specifying that access be enabled to the secondary VDisks. In Figure 8-321, we select the Global Mirror consistency group and Stop Copy Process from the drop-down menu and click Go.

Figure 8-321 Selecting the Global Mirror consistency group

As shown in Figure 8-322, we check the Enable write access... check box and click OK.

Figure 8-322 Enabling access to the secondary VDisks

When applying the Enable write access... option, the consistency group transits to the Idling state, as shown in Figure 8-323.


Figure 8-323 Viewing the Global Mirror consistency group after write access to the secondary VDisk

8.15.14 Restarting a Global Mirror Relationship in the Idling state


When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, then consistency will have been compromised. In this situation, we must check the Force option to start the copy process or the command will fail. As shown in Figure 8-324, we select the Global Mirror relationship and Start Copy Process from the drop-down menu and click Go.

Figure 8-324 Starting stand-alone Global Mirror relationship in the Idling state

As shown in Figure 8-325, we check the Force option, since write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary, and click OK.

Figure 8-325 Restarting the copy process


The Global Mirror relationship enters the Consistent copying state. When the background copy is complete, the relationship transits to the Consistent synchronized state, as shown in Figure 8-326.

Figure 8-326 Viewing the Global Mirror relationship

8.15.15 Restarting a Global Mirror consistency group in the Idling state


When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, then consistency will have been compromised. In this situation, we must check the Force option to start the copy process or the command will fail. As shown in Figure 8-327, we select the Global Mirror consistency group and Start Copy Process from the drop-down menu and click Go.

Figure 8-327 Starting the copy process for Global Mirror consistency group

As shown in Figure 8-328, we check the Force option and set the copy direction by selecting the master VDisks as the primary.


Figure 8-328 Restarting the copy process for the consistency group

When the background copy completes, the Global Mirror consistency group enters the Consistent synchronized state, as shown in Figure 8-329.

Figure 8-329 Viewing Global Mirror consistency groups

Also shown in Figure 8-330 are the individual relationships within that consistency group.

Figure 8-330 Viewing Global Mirror relationships

8.15.16 Changing copy direction for Global Mirror


When a stand-alone Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship. In Figure 8-331, we select the relationship GMREL3 and Switch Copy Direction from the drop-down menu and click Go.


Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transits from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Global Mirror relationship.

Figure 8-331 Selecting the relationship for which the copy direction is to be changed

In Figure 8-332, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Global Mirror relationship, we specify the auxiliary VDisk to be the primary, and click OK.

Figure 8-332 Selecting the primary VDisk as auxiliary to switch the copy direction

The copy direction is now switched and we are returned to the Viewing Global Mirror Relationship window, where we see that the copy direction has been switched, as shown in Figure 8-333.

Figure 8-333 Viewing Global Mirror relationship after changing the copy direction


8.15.17 Switching copy direction for a Global Mirror consistency group


When a Global Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the Global Mirror consistency group. In Figure 8-334 on page 634, we select the consistency group CG_W2K3_GM and Switch Copy Direction from the drop-down menu and click Go. Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transit from primary to secondary, because all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to switching the copy direction.

Figure 8-334 Selecting the consistency group for which the copy direction is to be changed

In Figure 8-335, we see that currently the primary VDisks are also the master. So, to change the copy direction for the Global Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.

Figure 8-335 Selecting the primary VDisk as auxiliary to switch the copy direction

The copy direction is now switched and we are returned to the Viewing Global Mirror Consistency Group window, where we see that the copy direction has been switched. Figure 8-336 shows that the auxiliary is now the primary.
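The equivalent CLI command for a consistency group is switchrcconsistgrp; a minimal sketch using the group name from our example is:

svctask switchrcconsistgrp -primary aux CG_W2K3_GM

As with the GUI, quiesce host I/O to the VDisks that are about to become secondaries before issuing the command.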


Figure 8-336 Viewing Global Mirror consistency groups after changing the copy direction

Figure 8-337 shows the new copy direction for individual relationships within that consistency group.

Figure 8-337 Viewing Global Mirror Relationships, after changing copy direction for Consistency Group.

Everything has now been completed as expected, and we are finished with Global Mirror.


8.16 Service and maintenance


This section discusses the various service and maintenance tasks that you can perform within the SVC environment. To perform all of the following activities, select the Service and Maintenance option in the SVC Welcome screen (Figure 8-338).
Note: You are prompted for a cluster user ID and password for some of the following tasks.

Figure 8-338 Service and Maintenance functions

8.17 Upgrading software


This section explains how to upgrade the SVC software.

8.17.1 Package numbering and version


The software upgrade package name ends in four positive integers separated by dots, indicating the version, release, modification, and fix level. For example, a software upgrade package may have the name IBM_2145_INSTALL_5.1.0.0.

8.17.2 Upgrade status utility


A function of the master console is to check the software levels in the system against recommended levels that are documented on the support Web site. You are informed if the software levels are up-to-date, or if you need to download and install newer levels. This information is provided after you log in to the SVC GUI.

In the middle of the Welcome screen, you will see that new software is available. Use the link that is provided there to download the new software and get more information about it.
Important: To use this feature, the SSPC/Master Console must be able to access the Internet. If the SSPC cannot access the Internet because of restrictions such as a local firewall, you will see the message The update server cannot be reached at this time. Use the Web link provided in the message for the latest software information.

8.17.3 Precautions before upgrade


In this section, we describe the precautions that you should take before attempting an upgrade.
Important: Before attempting any SVC code update, read and understand the SAN Volume Controller concurrent compatibility and code cross-reference matrix. Go to the following site and click the link for Latest SAN Volume Controller code:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707
During the upgrade, each node in your cluster is automatically shut down and restarted by the upgrade process. Because each node in an I/O group provides an alternate path to VDisks, you need to make sure that all I/O paths between all hosts and SANs are working; you can verify this with SDD datapath query commands. If you have not performed this check, some hosts might lose connectivity to their VDisks and experience I/O errors when the SVC node providing that access is shut down during the upgrade process (Example 8-1).
Example 8-1 Using datapath query commands to check all paths are online

C:\Program Files\IBM\SDDDSM>datapath query adapter

Active Adapters :2
Adpt#             Name   State    Mode  Select  Errors  Paths  Active
    0  Scsi Port2 Bus0   NORMAL  ACTIVE    167       0      4       4
    1  Scsi Port3 Bus0   NORMAL  ACTIVE    137       0      4       4

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#            Adapter/Hard Disk   State    Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL       37       0
    1  Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL        0       0
    2  Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL        0       0
    3  Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL       29       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#            Adapter/Hard Disk   State    Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL        0       0
    1  Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL      130       0
    2  Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL      108       0
    3  Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL        0       0

You can check the I/O paths by using datapath query commands, as shown in Example 8-1 on page 637. You do not need to check hosts that have no active I/O operations to the SANs during the software upgrade.
Tip: See the Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540, for more information about datapath query commands.
It is well worth double-checking that your UPS power configuration is set up correctly (even if your cluster is running without problems). Specifically:
- Ensure that your UPSs are all getting their power from an external source and that they are not daisy chained. In other words, make sure that each UPS is not supplying power to another node's UPS.
- Ensure that the power cable and the serial cable coming from the back of each node go back to the same UPS. If the cables are crossed and go back to different UPSs, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.
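In addition to checking the multipathing status on each host, it is worth confirming from the SVC CLI that every node in the cluster is online before starting the upgrade. The following is a minimal sketch, with the output abbreviated:

IBM_2145:ITSO-CLS2:admin>svcinfo lsnode -delim :
id:name:...:status:IO_group_name:...
1:Node1:...:online:io_grp0:...
2:Node2:...:online:io_grp0:...

Any node that is not reported as online should be investigated before the upgrade is started.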

8.17.4 SVC software upgrade test utility


This is an SVC software utility that checks for known issues that can cause problems during an SVC software upgrade. It can be run on any SVC cluster running level 4.1.0.0 or above, and it is available from the following location:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
The svcupgradetest utility can be used to check for known issues that might cause problems during a SAN Volume Controller software upgrade. It can be used to check for potential problems when upgrading from V4.1.0.0 and all later releases to the latest available level. The utility can be run multiple times on the same cluster to perform a readiness check in preparation for a software upgrade. We strongly recommend running this utility a final time immediately prior to applying the SVC upgrade, making sure that there have not been any new releases of the utility since it was originally downloaded. Once installed, the version information for this utility can be found by running svcupgradetest -h.
The installation and use of this utility is non-disruptive and does not require any SVC nodes to be restarted, so there is no interruption to host I/O. The utility is only installed on the current configuration node.
System administrators should continue to check whether the version of code they plan to install is the latest version. Information about the latest available code level can be found here:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code


This utility is intended to supplement rather than duplicate the existing tests carried out by the SVC upgrade procedure (e.g. checking for unfixed errors in the error log). The upgrade test utility includes command line parameters.

Prerequisites
This utility can only be installed on clusters running SVC V4.1.0.0 or later.

Installation Instructions
To use the upgrade test utility, follow these steps:
1. Download the latest version of the upgrade test utility (IBM2145_INSTALL_svcupgradetest_V.R) from:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
2. Install the utility package using the standard SVC Console (GUI) or command-line (CLI) software upgrade procedures that are used to install any new software onto the cluster. An example CLI command to install the package, once it has been uploaded to the cluster, is:
svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_X.XX
3. Run the upgrade test utility by logging on to the SVC command-line interface and running svcupgradetest -v <V.R.M.F>, where V.R.M.F is the version number of the SVC release being installed. For example, if upgrading to SVC V5.1.0.0, the command is svcupgradetest -v 5.1.0.0.
The output from the command will either state that no problems have been found, or will direct you to details about any known issues that have been discovered on this cluster. Example 8-2 shows the command to test an upgrade.
Example 8-2 Run an upgrade test

IBM_2145:ITSO-CLS2:admin>svcupgradetest
svcupgradetest version 4.11.
Please wait while the tool tests for issues that may prevent a software
upgrade from completing successfully. The test will take approximately
one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

8.17.5 Upgrade procedure


To upgrade the SVC cluster software, perform the following steps:
1. Use the Run Maintenance Procedure in the GUI and correct all open problems first, as described in 8.17.6, Running maintenance procedures on page 645.
2. Back up the SVC configuration, as described in 8.18.1, Backup procedure on page 669.
3. Back up the support data, just in case a problem during the upgrade renders a node unusable. This information can assist IBM Support in determining why the upgrade might have failed and help with a resolution. Example 8-3 shows the necessary command, which is only available in the CLI.
Example 8-3 Creating an SVC snapshot

IBM_2145:ITSO-CLS2:admin>svc_snap
Collecting system information...



Copying files, please wait...
Copying files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.100047.080617.002334.tgz

Note: You can ignore the error message No such file or directory.
Select Software Maintenance → List Dumps → Software Dumps, download the dump that was created in Example 8-3 on page 639, and store it in a safe place together with the SVC configuration backup that you created previously (see Figure 8-339 and Figure 8-340).

Figure 8-339 Getting software dumps


Figure 8-340 Downloading software dumps
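The snap file can also be copied directly off the configuration node from a command prompt. The following sketch assumes PuTTY's pscp is installed and that an SSH private key (here named icat.ppk, a hypothetical file name) is already authorized on the cluster:

pscp -i icat.ppk admin@cluster_ip:/dumps/snap.100047.080617.002334.tgz c:\temp\

Replace cluster_ip and the snap file name with the values from your own environment.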

4. From the SVC Welcome screen, click the Service and Maintenance option and then the Upgrade Software link. 5. In the Upgrade Software window shown in Figure 8-341, you can either upload a new software upgrade file or list the upgrade files. Click the Upload button to upload the latest SVC cluster code.

Figure 8-341 Update Software window

6. In the Software Upgrade (file upload) window (Figure 8-342 on page 642), type or browse to the directory on your management workstation (for example, master console) where you stored the latest code level and click Upload.


Figure 8-342 Software upgrade (file upload)

7. The File Upload window (Figure 8-343) is displayed if the file is uploaded. Click Continue.

Figure 8-343 File upload

8. The Select Upgrade File window (Figure 8-344) lists the available software packages. Make sure the radio button next to the package you want to apply is selected. Click the Apply button.

Figure 8-344 Select Upgrade File

9. In the Confirm Upgrade File window (Figure 8-345 on page 643), click the Confirm button.


Figure 8-345 Confirm Upgrade File

10.After this confirmation, the SVC will check if there are any outstanding errors. If there are no errors, click Continue, as shown in Figure 8-346, to proceed to the next upgrade step. Otherwise, the Run Maintenance button is displayed, which is used to check the errors. For more information about how to use the maintenance procedures, see 8.17.6, Running maintenance procedures on page 645.

Figure 8-346 Check Outstanding Errors window

11.The Check Node Status window shows the in-use nodes with their current status displayed, as shown in Figure 8-347. Click Continue to proceed.

Figure 8-347 Check Node Status window

12.The Start Upgrade window is displayed. Click the Start Software Upgrade button to start the software upgrade, as shown in Figure 8-348 on page 644.


Figure 8-348 Start Upgrade software window

The upgrade starts by upgrading one node in each I/O group.
13. The Software Upgrade Status window (Figure 8-349) opens. Click the Check Upgrade Status button periodically. This process might take a while to complete. When the software upgrade has completed, you will see a completion message, and the code level of the cluster and nodes will show the newly applied software level.

Figure 8-349 Software Upgrade Status
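You can also follow the progress from the CLI. As a sketch, the following commands report the overall upgrade state and the cluster code level (the cluster name ITSO-CLS2 is from our lab setup):

svcinfo lssoftwareupgradestatus
svcinfo lscluster ITSO-CLS2

The code_level field in the lscluster output shows the new version once all nodes have been upgraded.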

14. During the upgrade process, you can only issue informational commands. All task commands, such as the creation of a VDisk (as shown in Figure 8-350), are denied; this applies to both the GUI and the CLI. All tasks, such as creation, modification, mapping, and deletion, are denied.

Figure 8-350 Denial of a task command during the software update


15. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted in turn. Although unlikely, if the concurrent code load (CCL) fails, for example, if one node fails to accept the new code level, the update on that node is backed out and the node reverts to the original code level. From 4.1.0 onwards, the update then simply waits for user intervention. For example, if there are two nodes (A and B) in an I/O group, node A has been upgraded successfully, and node B then suffers a hardware failure, the upgrade ends with an I/O group that has a single node at the higher code level. When the hardware failure on node B is repaired, the CCL completes the code upgrade process.

Tip: Be patient! After the software update is applied, the first SVC node in a cluster updates and installs the new SVC code version shortly afterwards. If there is more than one I/O group (up to four I/O groups are possible) in an SVC cluster, the second node of the second I/O group loads the new SVC code and restarts with a 10 minute delay after the first node. A 30 minute delay between the update of the first node and the second node in an I/O group ensures that all paths, from a multipathing point of view, are available again. An SVC cluster update with one I/O group takes approximately one hour.
16. If you run into an error, go to the Analyze Error Log window and search for Software Install completed. Select the Sort by date radio button, with the newest first, and then click Perform. This should list the software install entry near the top. For more information about how to work with the Analyze Error Log window, see 8.17.10, Analyzing the error log on page 655. You might also find it worthwhile to capture information for IBM Support to help diagnose what went wrong.
You have now completed the tasks required to upgrade the SVC software. Click the X icon in the upper right corner of the display area to close the Upgrade Software window; take care not to close the browser by mistake.

8.17.6 Running maintenance procedures


To run the maintenance procedures on the SVC cluster, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance option and then the Run Maintenance Procedures link. 2. Click Start Analysis, as shown in Figure 8-351. This will analyze the cluster log and guide you through the maintenance procedures.

Figure 8-351 Maintenance Procedures

3. This generates a new error log file in the /dumps/elogs/ directory (Figure 8-352 on page 646). We can also see the list of the errors, as shown in Figure 8-352 on page 646.

Figure 8-352 Maintenance error log with unfixed errors

4. Click the error number in the Error Code column in Figure 8-352. This gives you the explanation for this error, as shown in Figure 8-353.

Figure 8-353 Maintenance: error code description

5. To perform problem determination, click Continue. The details for the error are displayed, and you might be given some options to diagnose or repair the problem. In this case, it asks you to check an external configuration and then click Continue (Figure 8-354 on page 647).


Figure 8-354 Maintenance procedures: fixing Stage 2

6. The SVC maintenance procedure has completed and the error is fixed, as shown in Figure 8-355.

Figure 8-355 Maintenance procedure: fixing Stage 3

7. The discovery reported no new errors, so the entry in the error log is now marked as fixed (as shown in Figure 8-356). Click OK.

Figure 8-356 Maintenance procedure: fixed

8.17.7 Setting up error notification


To set up error notification, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance option and then the Set SNMP Error Notifications link.


Figure 8-357 Setting SNMP error notification

2. Add the IP address of your SNMP manager, the (optional) port, and the community string to use (Figure 8-358). Select Add Server and click Go.
Note: Depending on which IP protocol addressing is configured, options for IPv4, IPv6, or both are displayed.

Figure 8-358 Set the SNMP settings

3. The next window now displays confirmation that it has updated the settings, as shown in Figure 8-359 on page 649.


Figure 8-359 Error Notification settings confirmation

4. The next window now displays the current status, as shown in Figure 8-360.

Figure 8-360 Current error notification settings

5. You can now click the X icon in the upper right corner of the Set SNMP Error Notification frame to close this window.
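With SVC 5.1, the same SNMP destination can alternatively be defined from the CLI. The following is a minimal sketch, assuming an SNMP manager at the hypothetical address 9.43.86.160 and the default public community:

svctask mksnmpserver -ip 9.43.86.160 -community public -error on -warning on -info off

Afterwards, list the SNMP server definitions to confirm the notification levels are what you intended.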

8.17.8 Set Syslog Event Notification


Starting with SVC 5.1, it is possible to send syslog messages to a defined syslog server. The SVC provides support for syslog in addition to e-mail and SNMP traps. Figure 8-361, Figure 8-362, and Figure 8-363 show the sequence of panels used to define a syslog server.


Figure 8-361 Syslog Server add

Figure 8-362 shows the syslog server definition.

Figure 8-362 Syslog Server definition


Figure 8-363 Syslog server confirmation

The syslog messages can be sent in compact message format or full message format. Example 8-4 shows a compact format syslog message.
Example 8-4 Compact syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 8-5 on page 651 shows a full format syslog message.
Example 8-5 Full Format Syslog message example.

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000#Additional Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000
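The syslog destination can also be created from the CLI. The following is a sketch only; the IP address is hypothetical, and the facility value selects the syslog facility used by the receiving server:

svctask mksyslogserver -ip 9.43.86.161 -facility 0 -error on -warning on -info on

List the syslog server definitions afterwards to verify the destination and the message format in use.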

8.17.9 Set e-mail features


The SVC GUI provides the ability to support the SVC e-mail error notification service. The SVC uses the e-mail server to send event notification and inventory e-mails to e-mail users, and it can transmit any combination of error, warning, and informational notification types. When running the e-mail service for the first time, the Web pages guide us through the required steps:
- Set the e-mail server and contact details
- Test the e-mail service
Figure 8-364 shows the set e-mail notification screen.


Figure 8-364 Set email notification screen

Figure 8-365 on page 652 shows how to insert contact details.

Figure 8-365 Contact details

Figure 8-366 shows the insert contact details confirmation window.

Figure 8-366 Contact details confirmation

Figure 8-367 on page 653 shows how to configure the SMTP server in the SVC cluster.


Figure 8-367 SMTP server definition

Figure 8-368 shows the SMTP server definition confirmation.

Figure 8-368 SMTP definition confirmation

Figure 8-369 on page 653 shows how to define the support e-mail where SVC notifications will be sent.

Figure 8-369 e-mail notification user


Figure 8-370 shows how to start the e-mail service.

Figure 8-370 Start e-mail service

Figure 8-371 shows how to start the Test e-mail process.

Figure 8-371 Test e-mail

Figure 8-372 on page 654 shows how to send a test e-mail to all users.

Figure 8-372 Test e-mail to all users

Figure 8-373 shows how to confirm the test e-mail notification.


Figure 8-373 Test e-mail notification confirmation
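The e-mail notification service can also be configured from the CLI. The following sketch uses hypothetical contact details and addresses; adjust them to your environment and verify that each command completes before starting the service:

svctask chemail -reply svcadmin@example.com -contact "ITSO Admin" -primary 555-0100 -location "Lab 1"
svctask mkemailuser -address support@example.com -error on -warning off -info off
svctask startemail
svctask testemail -all

The testemail -all command sends a test message to every defined e-mail user so that the mail path can be confirmed end to end.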

8.17.10 Analyzing the error log


The following types of events and errors are logged in the error log:
- Events: State changes that are detected by the cluster software and that are logged for informational purposes. Events are recorded in the cluster error log.
- Errors: Hardware or software problems that are detected by the cluster software and that require some sort of repair. Errors are recorded in the cluster error log.
- Unfixed errors: Errors that were detected and recorded in the cluster error log and that have not yet been corrected or repaired.
- Fixed errors: Errors that were detected and recorded in the cluster error log and that were subsequently corrected or repaired.
To display the error log for analysis, perform the following steps:
1. From the SVC Welcome screen, click the Service and Maintenance option and then the Analyze Error Log link.
2. From the Error Log Analysis window (Figure 8-374 on page 656), you can choose either the Process or Clear Log button.


Figure 8-374 Analyzing the error log

a. Select the appropriate radio buttons and click the Process button to display the log for analysis. The Analysis Options and Display Options radio button boxes allow you to filter the results of your log inquiry to reduce the output.
b. You can display the whole log, or you can filter the log so that only errors, events, or unfixed errors are displayed. You can also sort the results by selecting the appropriate display options. For example, you can sort the errors by error priority (lowest number = most serious error) or by date. If you sort by date, you can specify whether the newest or oldest error displays at the top of the table. You can also specify the number of entries that you want to display on each page of the table. Figure 8-375 on page 657 shows an example of the error logs listed.


Figure 8-375 Analyzing Error Log: Process

c. Click an underlined sequence number; this gives you the detailed log of this error (Figure 8-376).


Figure 8-376 Analyzing Error Log: Detailed Error Analysis

d. You can optionally display detailed sense code data by clicking the Sense Expert button shown in Figure 8-377 on page 659. Click Return to go back to the Detailed Error Analysis window.


Figure 8-377 Decoding Sense Data

e. If the log entry is an error, you have the option of marking the error as fixed. This does not run through any other checks or processes, so we recommend that you do this as a maintenance procedure task instead (see 8.17.6, Running maintenance procedures on page 645).
f. Click the Clear Log button at the bottom of the Error Log Analysis window (see Figure 8-374) to clear the log. If the error log contains unfixed errors, a warning message is displayed when you click Clear Log.
3. You can now click the X icon in the upper right corner of the Analyze Error Log window.

8.17.11 License settings


To change license settings, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance options and then the License Settings link as shown in Figure 8-378 on page 660.


Figure 8-378 License setting

2. Now you can choose from Capacity Licensing or Physical Disk Licensing. Figure 8-379 shows the Physical Disk Licensing Settings panel.

Figure 8-379 Physical Disk Licensing Setting Panel

Figure 8-380 on page 661 shows the Capacity Licensing Settings panel.


Figure 8-380 Capacity License Setting panel

3. In the License Settings window (Figure 8-381), consult your license before you make changes in this window. If you purchased additional features (for example, FlashCopy or Global Mirror) or if you increased the capacity of your license, make the appropriate changes. Then click the Update License Settings button.

Figure 8-381 License Settings

4. You now see a license confirmation window, as shown in Figure 8-382 on page 662. Review this window and ensure that you are in compliance. If you are in compliance, click I Agree to make the requested changes take effect.


Figure 8-382 License agreement

5. You return to the Update License Settings review window (Figure 8-383), where your changes should be reflected.

Figure 8-383 Featurization settings update

6. You can now click the X icon in the upper right corner of the License Settings window.
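For reference, license values can also be changed from the CLI with the chlicense command. The following is a sketch for a capacity-based license, using illustrative values only:

svctask chlicense -flash on -remote on -virtualization 20

The -virtualization value is the licensed capacity in terabytes; consult your actual license entitlement before changing any of these settings.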

8.17.12 Viewing the license settings log


To view the feature log, which registers the events related to the SVC licensed features, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance option and then the View License Settings Log link. 2. The License Log window (Figure 8-384 on page 663) opens. It displays the current license settings and a log of when changes were made.


Figure 8-384 Feature log

3. You can now click the X icon in the upper right corner of the View License Settings Log window.

8.17.13 Dumping the cluster configuration


1. To dump your cluster configuration, click the Service and Maintenance option and then the Dump Configuration link, as shown in Figure 8-385.

Figure 8-385 Dump Configuration

8.17.14 Listing dumps


To list the dumps that were generated, perform the following steps:
1. From the SVC Welcome screen, click the Service and Maintenance option and then the List Dumps link.
2. In the List Dumps window (Figure 8-386 on page 664), you see several dumps and log files that were generated over time on this node. They include the configuration dump we generated in Example 8-3 on page 639. Click any of the available links (the underlined text in the table under the List Dumps heading) to go to another window that displays the available dumps. To see the dumps on the other node, you must click Check Other Nodes.
Note: By default, the dump and log information that is displayed is available from the configuration node. In addition to these files, each node in the SVC cluster keeps a local software dump file, and occasionally other dumps are stored on them. Click the Check Other Nodes button at the bottom of the List Dumps window (Figure 8-386) to see which dumps or logs exist on other nodes in your cluster.

Figure 8-386 List Dumps

3. Figure 8-387 shows the list of dumps from the partner node. You can see a list of the dumps by clicking one of the Dump Types.

Figure 8-387 List Dumps from the partner node

4. To copy a file from this partner node to the config node, click the dump type and then click the file you want to copy, as shown in Figure 8-388 on page 665.


Figure 8-388 Copy dump files

5. You will see a confirmation window that the dumps are being retrieved. You can either Continue working with the other node or Cancel back to the original node (Figure 8-389).

Figure 8-389 Retrieve dump confirmation

6. After all the necessary files are copied to the SVC config node, click Cancel to finish the copy operation, and Cancel again to return to the SVC config node. Now, for example, if you click the Error Logs link, you should see information similar to that shown in Figure 8-390 on page 666.


Figure 8-390 List Dumps: Error Logs

7. From this window, you can perform either of the following tasks:
- Click any of the available log file links (indicated by the underlined text) to display the log in complete detail.
- Delete one or all of the dump or log files. To delete all of them, click the Delete All button. To delete selected files, select the radio button or buttons to the right of the files and click the Delete button.
8. You can now click the X icon in the upper right corner of the List Dumps window.
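The same dump files can be listed and gathered from the CLI. As a sketch, the following commands list the dumps on the configuration node and copy the dumps from another node (node ID 2 here is illustrative) onto the configuration node:

svcinfo ls2145dumps
svcinfo lssoftwaredumps
svctask cpdumps -prefix /dumps 2

After cpdumps completes, the copied files appear in the List Dumps window of the configuration node.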

8.17.15 Setting up a quorum disk


The SVC cluster, after the process of node discovery, automatically chooses three MDisks as quorum disks. Each disk is assigned an index number of 0, 1, or 2.
In the event that half the nodes in a cluster are missing for any reason, the other half cannot simply assume that those nodes are dead. It might simply mean that the cluster state information is not being successfully passed between nodes (because of a network failure, for example). For this reason, if half of the cluster disappears from the view of the other half, each surviving half attempts to lock the first quorum disk (index 0). If quorum disk index 0 is not available to a node, the next disk (index 1) becomes the quorum, and so on.
The half of the cluster that is successful in locking the quorum disk becomes the exclusive processor of I/O activity and attempts to re-form the cluster with any nodes it can still see. The other half stops processing I/O. This provides a tie-breaker solution and ensures that both halves of the cluster do not continue to operate independently. If both halves can still see the quorum disk, they use it to communicate with each other and decide which half becomes the exclusive processor of I/O activity.


If, for any reason, you want to set your own quorum disks (for example, if you have installed additional back-end storage and you want to move one or two quorum disks onto this newly installed back-end storage subsystem), complete the following tasks: From the Welcome screen, select Work with Managed Disks, then select Quorum Disks; this takes you to the window shown in Figure 8-391.

Figure 8-391 Select quorum disk

We can now select our quorum disks and identify which is to be the active one. To change the active quorum, as shown in Figure 8-392 on page 667, we start by selecting the MDisk we want to contain our quorum disk.

Figure 8-392 Selecting a new active Quorum disk

We confirm that we want to change the active quorum disk as shown in Figure 8-393.

Figure 8-393 Confirming the change of active quorum disk

After we have changed the active quorum, we can see that our previous active quorum disk is in the initializing state, as shown in Figure 8-394.


Figure 8-394 Quorum disk initializing

Shortly afterwards, the change completes successfully, as shown in Figure 8-395 on page 668.

Figure 8-395 New quorum disk is now active

Quorum disks are only created if at least one MDisk is in managed mode (that is, it has been formatted by the SVC with extents on it). Otherwise, a 1330 cluster error message is displayed on the SVC front panel. You can correct it only by placing MDisks in managed mode.
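The quorum disk assignments can also be checked from the CLI. The following command and abbreviated output are a sketch of what to expect on an SVC 5.1 cluster; the controller names are illustrative:

svcinfo lsquorum
quorum_index status id name   controller_id controller_name active
0            online 0  mdisk0 0             DS4500          yes
1            online 1  mdisk1 0             DS4500          no
2            online 2  mdisk2 1             DS4700          no

The active column identifies the quorum disk that currently acts as the tie-breaker.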

8.18 Backing up the SVC configuration


The SVC configuration data is stored on all the nodes in the cluster. It is specially hardened so that, in normal circumstances, the SVC should never lose its configuration settings. However, in exceptional circumstances, this data could become corrupted or lost. This section details the tasks that you can perform to save the configuration data from an SVC configuration node and restore it. The following configuration information is backed up:
- Storage subsystems
- Hosts
- Managed disks (MDisks)
- Managed Disk Groups (MDGs)
- SVC nodes
- Virtual disks (VDisks)
- VDisk-to-host mappings
- FlashCopy mappings
- FlashCopy consistency groups
- Mirror relationships
- Mirror consistency groups


Backing up the cluster configuration enables you to restore your cluster configuration in the event that it is lost. Only the data that describes the cluster configuration is backed up; in order to back up your application data, you need to use the appropriate backup methods. To begin the restore process, consult IBM Support to determine why you cannot access your original configuration data.
The prerequisites for a successful backup are as follows:
- All nodes in the cluster must be online.
- No object name can begin with an underscore (_).
- Do not run any independent operations that could change the cluster configuration while the backup command runs.
- Do not make any changes to the fabric or cluster between backup and restore. If changes are made, back up your configuration again, or you might not be able to restore it later.
Note: We recommend that you make a backup of the SVC configuration data after each major change in the environment, such as defining or changing VDisks, VDisk-to-host mappings, and so on. The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and must be copied to an external and secure place for backup purposes.
Important: We strongly recommend that you change the default names of all objects to non-default names. For objects with a default name, a warning is produced and the object is restored with its original name with _r appended to it.

8.18.1 Backup procedure


To backup the SVC configuration data, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance option and then the Backup Configuration link. 2. In the Backing up a Cluster Configuration window (Figure 8-396), click the Backup button.

Figure 8-396 Backing up Cluster Configuration data

3. After the configuration backup has completed successfully, you see messages similar to the ones shown in Figure 8-397 on page 670. Make sure that you read, understand, act upon, and document the warning messages, because they can influence the restore procedure.


Figure 8-397 Configuration backup successful message and warnings

4. You can now click the X icon in the upper right corner of the Backing up a Cluster Configuration window. Note: To avoid getting the CMMVC messages that are shown in Figure 8-397, you need to replace all the default names, for example, mdisk1, vdisk1, and so on.
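The same backup can be triggered from the CLI, which is useful for scripting regular configuration backups. A minimal sketch is:

IBM_2145:ITSO-CLS2:admin>svcconfig backup

The resulting svc.config.backup.xml file is written to the /tmp directory on the configuration node and should then be copied to a secure external location, as described in the next section.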

8.18.2 Saving the SVC configuration


To save the SVC configuration in a safe place, proceed as follows: From the List Dumps panel, select Software Dumps and then right-click the configuration dump that you want to save. Figure 8-398 on page 671 shows the Software Dumps list with the save option.


Figure 8-398 Software Dumps list with options

After you have saved your configuration file, it will be presented to you as an .xml file. Figure 8-399 shows an SVC backup configuration file example.

Figure 8-399 .xml SVC Backup Configuration file


8.18.3 Restoring the SVC configuration


It is very important that you perform the configuration backup described in 8.18.1, Backup procedure on page 669 periodically, and every time after you change the configuration of your cluster. Carry out the restore procedure only under the direction of IBM Level 3 support.

8.18.4 Deleting the configuration backup files


This section details the tasks that you can perform to delete the configuration backup files from the default folder in the SVC master console. You can do this if you have already copied them to another external and secure place. To delete the SVC Configuration backup files, perform the following steps: 1. From the SVC Welcome screen, click the Service and Maintenance options and then the Delete Backup link. 2. In the Deleting a Cluster Configuration window (Figure 8-400), click the OK button to confirm the deletion. This deletes the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder (where SVCclustername is the SVC cluster name on which you are working) on the SVC master console and all its contents.

Figure 8-400 Deleting a Cluster Configuration

3. Click Delete to confirm the deletion of the configuration backup data. See Figure 8-401.

Figure 8-401 Deleting a Cluster Configuration confirmation message

4. The cluster configuration is now deleted.

8.18.5 Fabrics
From the Fabrics link in the Service and Maintenance panel, it is possible to get a view of the fabrics from the SVC's point of view. This can be useful when debugging a SAN problem. Figure 8-402 on page 673 shows a Viewing Fabrics example.


Figure 8-402 Viewing Fabrics example

8.18.6 CIMOM Log Configuration


Because the CIMOM has been moved from the HMC (SSPC) to the SVC cluster starting with SVC 5.1, it is possible to configure the SVC CIMOM log using the GUI in order to set the logging detail level. Figure 8-403 shows the CIMOM Configuration Log screen.

Figure 8-403 CIMOM Configuration Log screen

This ends the Service and Maintenance operational tasks.


Chapter 9. Data migration
In this chapter, we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure using the SVC. We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period, or after using the SVC purely as a data mover, either because it offered the best data migration performance or because it gave the best SLA to your applications during the data migration. Moreover, we show how it is possible to migrate from a fully allocated VDisk to a Space-Efficient VDisk by using the VDisk Mirroring feature and Space-Efficient VDisks together. We also show an example of how to use intracluster Metro Mirror to migrate data.


9.1 Migration overview


The SVC allows the mapping of Virtual Disk (VDisk) extents to Managed Disk (MDisk) extents to be changed without interrupting host access to the VDisk. This functionality is utilized when performing VDisk migrations and can be performed for any VDisk defined on the SVC.
This functionality can be used for:
- Redistribution of VDisks, and thereby the workload, within an SVC cluster across back-end storage:
  - Moving workload onto newly installed storage
  - Moving workload off old or failing storage, ahead of decommissioning it
  - Moving workload to rebalance a changed workload
- Migrating data from older back-end storage to SVC managed storage
- Migrating data from one back-end controller to another using the SVC as a data block mover and afterwards removing the SVC from the SAN
- Migrating data from managed mode back into image mode prior to removing the SVC from a SAN

9.2 Migration operations


Migration can be performed at either the VDisk or the extent level, depending on the purpose of the migration. The supported migration activities are:
- Migrating extents within a Managed Disk Group (MDG), redistributing the extents of a given VDisk on the MDisks in the MDG
- Migrating extents off an MDisk, which is removed from the MDG, to other MDisks in the MDG
- Migrating a VDisk from one MDG to another MDG
- Migrating a VDisk to change the virtualization type of the VDisk to image
- Migrating a VDisk between I/O groups

9.2.1 Migrating multiple extents (within an MDG)


A number of VDisk extents can be migrated at once by using the migrateexts command. When executed, this command migrates a given number of extents from the source MDisk, where the extents of the specified VDisk reside, to a defined target MDisk that must be part of the same MDG. The number of migration threads that will be used in parallel can be specified, from 1 to 4. If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is migrated, while the MDisk access mode transitions from image to managed.


The syntax of the CLI command is: svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents -target target_mdisk_id | target_mdisk_name [-threads number_of_threads] -vdisk vdisk_id | vdisk_name The parameters for the CLI command are: -vdisk: Specifies the VDisk ID or name to which the extents belong. -source: Specifies the source Managed Disk ID or name on which the extents currently reside. -exts: Specifies the number of extents to migrate. -target: Specifies the target MDisk ID or name onto which the extents are to be migrated. -threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.

9.2.2 Migrating extents off an MDisk that is being deleted


When an MDisk is deleted from an MDG using the rmmdisk -force command, any occupied extents on the MDisk are migrated off the MDisk (to other MDisks in the MDG) prior to its deletion. In this case, the extents that need to be migrated are moved onto the set of MDisks that are not being deleted, and the extents are distributed. This statement holds true if multiple MDisks are being removed from the MDG at the same time; MDisks that are being removed are not candidates for supplying free extents to the extent allocation algorithm.
If a VDisk uses one or more extents that need to be moved as a result of an rmmdisk command, then the virtualization type for that VDisk is set to striped (if it was previously sequential or image). If the MDisk is operating in image mode, the MDisk transitions to managed mode while the extents are being migrated, and upon deletion it transitions to unmanaged mode.
The syntax of the CLI command is:
svctask rmmdisk -mdisk mdisk_id_list | mdisk_name_list [-force] mdisk_group_id | mdisk_group_name
The parameters for the CLI command are:
-mdisk: Specifies one or more MDisk IDs or names to delete from the group.
-force: Migrates any data that belongs to other VDisks before removing the MDisk.
Note: If the -force flag is not supplied and VDisks occupy extents on one or more of the MDisks specified, the command will fail. When the -force flag is supplied and VDisks exist that are made from extents on one or more of the MDisks specified, all extents on the MDisks will be migrated to the other MDisks in the MDG, if there are enough free extents in the MDG. The deletion of the MDisks is postponed until all extents are migrated, which can take some time. If there are not enough free extents in the MDG, the command will fail. When the -force flag is supplied, the command completes asynchronously.


9.2.3 Migrating a VDisk between MDGs


An entire VDisk can be migrated from one MDG to another MDG using the migratevdisk command. A VDisk can be migrated between MDGs regardless of the virtualization type (image, striped, or sequential), though it will transition to the virtualization type of striped. The command varies depending on the type of migration, as shown in Table 9-1.
Table 9-1 Migration type

MDG-to-MDG type        Command
Managed to managed     migratevdisk
Image to managed       migratevdisk
Managed to image       migratetoimage
Image to image         migratetoimage

The syntax of the migratevdisk CLI command is:
svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name [-threads number_of_threads -copy_id] -vdisk vdisk_id | vdisk_name
The parameters for the CLI command are:
-vdisk: Specifies the VDisk ID or name to migrate into another MDG.
-mdiskgrp: Specifies the target MDG ID or name.
-threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
-copy_id: Required if the specified VDisk has more than one copy.
The syntax of the migratetoimage CLI command is:
svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]
The parameters for the CLI command are:
-vdisk: Specifies the name or ID of the source VDisk to be migrated.
-copy_id: Required if the specified VDisk has more than one copy.
-mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)
-mdiskgrp: Specifies the MDG into which the MDisk must be placed once the migration has completed.
-threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.


In Figure 9-1, we illustrate how VDisk V3 is migrated from MDG1 to MDG2.
Important: In order for the migration to be valid, the source and destination MDGs must have the same extent size.
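A usage sketch matching the scenario illustrated in Figure 9-1 (the names are illustrative) would be:

svctask migratevdisk -mdiskgrp MDG2 -threads 4 -vdisk V3

This starts the background migration of VDisk V3 into MDG2 while host I/O to V3 continues.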

Figure 9-1 Managed VDisk migration to another MDG

Extents are allocated to the migrating VDisk from the set of MDisks in the target MDG, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads to use while migrating; using only one thread puts the least background load on the system. If a large number of extents are being migrated, you can specify the number of threads that will be used in parallel (from 1 to 4).
The offline rules apply to both MDGs. Therefore, referring back to Figure 9-1, if any of the MDisks M4, M5, M6, or M7 go offline, then VDisk V3 goes offline. If MDisk M4 goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online.
If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is migrated, while the MDisk access mode transitions from image to managed. For the duration of the move, the VDisk is listed as being a member of the original MDG. For the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end of the migration.


9.2.4 Migrating the VDisk to image mode


The facility to migrate a VDisk to an image mode VDisk can be combined with the ability to migrate between MDGs. The source for the migration can be a managed mode or an image mode VDisk. This leads to four possibilities:
- Migrate image mode to image mode within an MDG.
- Migrate managed mode to image mode within an MDG.
- Migrate image mode to image mode between MDGs.
- Migrate managed mode to image mode between MDGs.
To be able to migrate:
- The destination MDisk must be greater than or equal to the size of the VDisk.
- The MDisk specified as the target must be in an unmanaged state at the time the command is run.
- If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes.
- If the migration involves moving between MDGs, the VDisk behaves as described in 9.2.3, Migrating a VDisk between MDGs on page 678.
The syntax of the CLI command is:
svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]
The parameters for the CLI command are:
-copy_id: Required if the specified VDisk has more than one copy.
-vdisk: Specifies the name or ID of the source VDisk to be migrated.
-mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)
-mdiskgrp: Specifies the MDG into which the MDisk must be placed once the migration has completed.
-threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
Regardless of the mode in which the VDisk starts, it is reported as managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration. Upon completion of the command, the VDisk is classified as an image mode VDisk.

9.2.5 Migrating a VDisk between I/O groups


A VDisk can be migrated between I/O groups using the svctask chvdisk command. This is only supported if the VDisk is not in a FlashCopy Mapping or Remote Copy relationship. In order to move a VDisk between I/O groups, the cache must be flushed. The SVC will attempt to destage all write data for the VDisk from the cache during the I/O group move. This flush will fail if data has been pinned in the cache for any reason (such as an MDG being offline). By default, this will cause the migration between I/O groups to fail, but this behavior can be overridden using the -force flag. If the -force flag is used and if the SVC is unable to destage all write data from the cache, then the result is that the contents of the VDisk are


corrupted by the loss of the cached data. During the flush, the VDisk operates in cache write-through mode.
Attention: Do not move a VDisk to an offline I/O group under any circumstance. You must ensure that the I/O group is online before you move the VDisks to avoid any data loss.
You must quiesce host I/O before the migration for two reasons:
- If there is significant data in cache that takes a long time to destage, the command line will time out.
- SDD vpaths associated with the VDisk are deleted before the VDisk move takes place in order to avoid data corruption. Data corruption could occur if I/O is still ongoing at a particular LUN ID when it is reused for another VDisk.
When migrating a VDisk between I/O groups, you cannot specify the preferred node; the preferred node is assigned by the SVC.
The syntax of the CLI command is:
svctask chvdisk [-name -new_name_arg][-iogrp -io_group_id | - io_group_name [-force]] [-node -node_id | - node_name [-rate -throttle_rate]] [-unitmb -udid -vdisk_udid] [-warning -disk_size | -disk_size_percentage] [-autoexpand -on | -off [ -copy -id]] [-primary -copy_id][-syncrate -percentage_arg] [vdisk_name | vdisk_id [-unit [-b | -kb | -mb | -gb | -tb | -pb]]]
For detailed information about the chvdisk command parameters, refer to the SVC command-line interface help by typing:
svctask chvdisk -h
Or refer to the Command-Line Interface User's Guide, SG26-7903-05.
The chvdisk command modifies a single property of a virtual disk (VDisk). To change the VDisk name and modify the I/O group, for example, you must issue the command twice. A VDisk that is a member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O group, and this restriction cannot be overridden by using the -force flag.
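As a usage sketch (with hypothetical names, and assuming host I/O has been quiesced and the multipathing configuration prepared as described above), the following command moves a VDisk to I/O group io_grp1:

svctask chvdisk -iogrp io_grp1 VD_app1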

9.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use the following CLI command:
svcinfo lsmigrate
To determine the extent allocation of MDisks and VDisks, use the following commands:
- To list the VDisk IDs and the corresponding number of extents that the VDisks occupy on the queried MDisk:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
- To list the MDisk IDs and the corresponding number of extents that the queried VDisk occupies on the listed MDisks:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
- To list the number of free extents available on an MDisk:
svcinfo lsfreeextents <mdiskname | mdisk_id>
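The following is a sketch of typical lsmigrate output while a VDisk migration between MDGs is in progress; the exact fields and values can vary by code level and configuration:

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 50
migrate_source_vdisk_index 3
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0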


Important: After a migration has been started, there is no way for you to stop the migration. The migration runs to completion unless it is stopped or suspended by an error condition, or if the VDisk being migrated is deleted.

9.3 Functional overview of migration


This section describes the functional view of data migration.

9.3.1 Parallelism
Some of the activities described below can be carried out in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of migration activities:
- Migrate multiple extents
- Migrate between MDGs
- Migrate off deleted MDisk
- Migrate to image mode
These high-level migration tasks operate by scheduling single extent migrations, as follows:
- Up to 256 single extent migrations can run concurrently. This number is made up of the single extent migrations that result from the operations listed above.
- The Migrate Multiple Extents and Migrate Between MDGs commands support a flag that allows you to specify the number of threads to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.

Per MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrations are scheduled for a particular MDisk, further migrations are queued, pending the completion of one of the currently running migrations.


9.3.2 Error handling


The migration is suspended or stopped if a medium error occurs on a read from the source and the destination's medium error table is full, if an I/O error occurs repeatedly on a read from the source, or if the MDisks involved go offline repeatedly.

The migration will be suspended if any of the following conditions exist; otherwise, it will be stopped:
- The migration is between Managed Disk Groups and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress would leave a VDisk spanning MDGs, which is not a valid configuration other than during a migration.
- The migration is a Migrate to Image Mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress would leave the VDisk in an inconsistent state.
- The migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped, any migrations that are queued awaiting the use of the MDisk for migration are now considered. If a migration is suspended, however, it continues to use resources, and so another migration is not started.

The SVC attempts to resume the migration when the error log entry is marked as fixed using the CLI or the GUI. If the error condition no longer exists, the migration will proceed. The migration might resume on a different node than the node that started the migration.

9.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. The algorithm used to migrate an extent is as follows:
1. Pause all I/O on the source MDisk on all nodes in the SVC cluster (that is, queue all new I/O requests in the virtualization layer in the SVC and wait for all outstanding requests to complete). I/O to other extents is unaffected.
2. Unpause I/O on the source MDisk extent, apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and the destination.
3. On the node performing the migration, for each 256 KB section of the chunk: synchronously read 256 KB from the source, then synchronously write 256 KB to the target.
4. When the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. When the entire extent has been migrated, pause all I/O to the extent being migrated, checkpoint the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes go only to the destination).
6. If the checkpoint fails, the I/O is unpaused.


During the migration, the extent can be divided into three regions, as shown in Figure 9-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer, waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent in order to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take some time (minutes) to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.

Figure 9-2 Migrating an extent

The SVC guarantees read stability during data migrations, even if the data migration is stopped by a node reset or a cluster shutdown. This is possible because the SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning. To summarize:
- Extents are migrated in 16 MB chunks, one chunk at a time.
- At any point, a chunk has either been copied, is in progress, or has not yet been copied.
- When the migration of an extent is finished, its new location is saved.

Figure 9-3 on page 685 shows the relationship between data migration and write operations.


Figure 9-3 Migration and write operation relationship

9.4 Migrating data from an image mode VDisk


This section describes how to migrate data from an image mode VDisk to a fully managed VDisk.

9.4.1 Image mode VDisk migration concept


First, we describe the concepts associated with this operation.

MDisk modes
There are three different MDisk modes:
1. Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
2. Image Mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the VDisk with no virtualization. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one VDisk.
3. Managed Mode MDisk: Managed mode MDisks contribute extents to the pool of extents available in the MDG. Zero or more managed mode VDisks might use these extents.
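If your code level supports filtering the concise view (an assumption to verify for your release), listing MDisks by mode is a quick way to confirm the mode of each MDisk, for example:

svcinfo lsmdisk -filtervalue mode=unmanaged
svcinfo lsmdisk -filtervalue mode=image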

Transitions between the different modes


The following state transitions can occur to an MDisk (see Figure 9-4 on page 686):
1. Unmanaged mode to managed mode. This occurs when an MDisk is added to an MDisk group. This makes the MDisk eligible for the allocation of data and metadata extents.
2. Managed mode to unmanaged mode. This occurs when an MDisk is removed from an MDisk group.
3. Unmanaged mode to image mode.

This occurs when an image mode VDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
4. Image mode to unmanaged mode. There are two distinct ways in which this can happen:
- When an image mode VDisk is deleted, the MDisk that supported the VDisk becomes unmanaged.
- When an image mode VDisk is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off it. It then transitions to unmanaged mode.
5. Image mode to managed mode. This occurs when the image mode VDisk that is using the MDisk is migrated into managed mode.
6. Managed mode to image mode is not possible. There is no operation that will take an MDisk directly from managed mode to image mode. This can be achieved by performing operations that convert the MDisk to unmanaged mode and then to image mode (see the CLI sketch after Figure 9-4).
Figure 9-4 Different states of a VDisk
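As a hedged sketch of the indirect path described in transition 6 (all object names are hypothetical, and the MDisk must no longer contain any VDisk extents before it can be removed from its group), the sequence could look like this:

svctask rmmdisk -mdisk mdisk8 MDG_source
svctask migratetoimage -vdisk VDISK1 -mdisk mdisk8 -mdiskgrp MDG_image

The rmmdisk command returns mdisk8 to unmanaged mode; the migratetoimage command then uses it as the target of a migration to image mode.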

Image mode VDisks have the special property that the last extent in the VDisk can be a partial extent. Managed mode disks do not have this property. To perform any type of migration activity on an image mode VDisk, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last extent, then this last extent in the image mode VDisk must be the first to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the VDisk becomes a managed mode VDisk and is treated in the same way as any other managed mode VDisk. If the image mode

disk does not have a partial last extent, then no special processing is performed, the image mode VDisk is simply changed into a managed mode VDisk, and is treated in the same way as any other managed mode VDisk. After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.

9.4.2 Migration tips


You have several methods to migrate an image mode VDisk into a managed mode VDisk:
- If your image mode VDisk is in the same MDG as the MDisks onto which you want to migrate the extents, you can:
  - Migrate a single extent. You have to migrate the last extent of the image mode VDisk (number N-1).
  - Migrate multiple extents.
  - Migrate all the in-use extents from an MDisk.
  - Migrate extents off an MDisk that is being deleted.
- If you have two MDGs, one for the image mode VDisk and one for the managed mode VDisks, you can migrate a VDisk from one MDG to another.

The recommended method is to have one MDG for all the image mode VDisks, and other MDGs for the managed mode VDisks, and to use the migrate VDisk facility (see the CLI sketch that follows). Be sure to check that enough extents are available in the target MDG.
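As a minimal sketch of this recommended method (the VDisk and MDG names are hypothetical), migrating an image mode VDisk into an MDG of managed MDisks from the CLI could look like this:

svctask migratevdisk -vdisk IMAGE_VDISK1 -mdiskgrp MDG_MANAGED
svcinfo lsmigrate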

9.5 Data migration for Windows using the SVC GUI


In this section, we move the two LUNs from a Windows 2008 server that is currently attached to a DS4700 storage subsystem over to the SVC. We then manage those LUNs with the SVC, migrate them from image mode VDisks to VDisks, migrate one of them back to an image mode VDisk, and finally move it to another image mode VDisk on another storage subsystem, so that those LUNs can then be masked/mapped back to the host directly. This procedure would, of course, also work if we moved the LUN back to the same storage subsystem.

Using this example will help you perform any one of the following activities in your environment:
- Move a Microsoft server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. This would be the first activity that you would perform when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap/remask disks using your storage subsystem LUN management tool. This step is detailed in 9.5.2, SVC added between the host system and the DS4700 on page 689.
- Migrate your image mode VDisk to a VDisk while your host is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment, or if you wanted to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. This step is covered in 9.5.4, Migrating the VDisk from image mode to managed mode on page 700.


- Migrate your VDisk to an image mode VDisk. You might perform this activity if you were removing the SVC from your SAN environment after a trial period. This step is detailed in 9.5.5, Migrating the VDisk from managed mode to image mode on page 702.
- Move an image mode VDisk to another image mode VDisk. This procedure can be used to migrate data from one storage subsystem to another. This step is detailed in 9.6.6, Migrate the VDisks to image mode VDisks on page 727.

These activities can be used individually, or together, enabling you to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime required for these activities is the time it takes you to remask/remap the LUNs between the storage subsystems and your SVC.

9.5.1 Windows 2008 host system connected directly to the DS4700


In our example configuration, we use a Windows 2008 host, a DS4700, and a DS4500. The host has two LUNs (drives X and Y). The two LUNs are part of one DS4700 array. Before the migration, LUN masking is defined in the DS4700 to give the Windows 2008 host system access to the two volumes, labeled X and Y (see Figure 9-6). Figure 9-5 shows the starting zoning scenario.

Figure 9-5 Starting zoning scenario

Figure 9-6 shows the two LUNs (drive X and Y).


Figure 9-6 Drive x and y

Figure 9-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 Fast Multipath Device.

Figure 9-7 Disk properties

9.5.2 SVC added between the host system and the DS4700
Figure 9-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem would not be required to migrate to


SVC, but in the following examples, we will show that it is possible to move data across storage subsystems without any host downtime.

Figure 9-8 Add SVC and second storage

To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps:
1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC and remove the masking for the host. Figure 9-9 on page 691 shows the two LUNs, with LUN IDs 12 and 13, remapped to the SVC cluster ITSO-CLS3.


Figure 9-9 LUNs remapped

5. Log on to your SVC Console, open Work with Managed Disks → Managed Disks, select Discover Managed Disks in the drop-down field, and click Go (Figure 9-10).

Figure 9-10 Discover managed disks

Figure 9-11 on page 692 shows the two LUNs discovered as mdisk12 and mdisk13.


Figure 9-11 Mdisk12 and Mdisk13 discovered

6. Now we create one new empty MDG for each MDisk that we want to use to create an image mode VDisk later. Open Work with Managed Disks → Managed Disk Groups, select Create an MDisk Group in the drop-down field, and click Go. Figure 9-12 shows the MDisk Group creation.

Figure 9-12 MDG creation

7. Click Next.
8. Enter the MDG name, MDG_img_1, and do not select any MDisk, as shown in Figure 9-13 on page 693. Then click Next.


Figure 9-13 MDG for image VDisk creation

9. Choose the extent size that you want to use, as shown in Figure 9-14, and then click Next. Keep in mind that the extent size you choose must be the same as the extent size of the MDG to which you will migrate your data later.

Figure 9-14 extent size selection

10.Now click Finish in order to complete the MDG creation. Figure 9-15 shows the completion screen.

Figure 9-15 Completion screen


11.Now we create new VDisks, named W2k8_Log and W2k8_Data, using the two newly discovered MDisks in the MDisk group MDG0, as follows:
12.Open the Work with Virtual Disks → Virtual Disks view and, as shown in Figure 9-16, select Create an Image Mode VDisk from the list and click Go.

Figure 9-16 image VDisk creation

13.The Create Image Mode Virtual Disk window (Figure 9-17 on page 694) is displayed. Click Next.

Figure 9-17 Create Image Mode Virtual Disk window

14.Type the name that you want to use for the VDisk and select the attributes; in our case, the name is W2k8_Log. Click Next (Figure 9-18).


Figure 9-18 Set the attributes for the image mode Virtual Disk

15.Select the MDisk to create the image mode virtual disk and click Next (Figure 9-19).

Figure 9-19 Select the MDisk to use for your image disk

16.Select an I/O group, the preferred node, and the MDisk group that you just created. Optionally, you can let the system choose these settings (Figure 9-20). Click Next.


Figure 9-20 Select I/O Group and MDisk Group

Note: If you have more than two nodes in the cluster, select the I/O group of the nodes so that the load is shared evenly.
17.Review the summary and click Finish to create the image mode VDisk. Figure 9-21 on page 696 shows the image VDisk summary and attributes.

Figure 9-21 Verify Attributes

18.Repeat steps 6 through 17 for each LUN you want to migrate to the SVC. 19.In the Viewing Virtual Disk view, we see the two newly created VDisks, as shown in Figure 9-22. In our example, they are named W2k8_log and W2k8_data.


Figure 9-22 Viewing Virtual Disks

20.In the MDisk view (Figure 9-23), we see the two new MDisks are now shown as Image Mode Disk. In our example, they are named mdisk12 and mdisk13.

Figure 9-23 Viewing Managed Disks

21.Map the VDisks again to the Windows 2008 host system: 22.Open the Work with Virtual Disks and Virtual Disks view, mark the VDisks and select Map Virtual Disk to a Host, and click Go (Figure 9-24).

Figure 9-24 Map virtual disk to a host

23.Choose the host and enter the SCSI LUN IDs. Click OK (Figure 9-25 on page 698).


Figure 9-25 Creating Virtual Disk to host mappings

9.5.3 Put the migrated disks on a Windows 2008 host online


1. Start the Windows 2008 host system again and open Computer Management to see that the new disk properties have changed to 2145 Multi-Path Disk Device (Figure 9-26).

Figure 9-26 Disk Management


2. Figure 9-27 shows the Disk management window

Figure 9-27 Migrated disks are available

3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM to open the SDDDSM command-line utility (Figure 9-28).

Figure 9-28 Subsystem Device Driver DSM CLI


4. Enter the command datapath query device to check if all paths are available, as planned in your SAN environment (Example 9-1).
Example 9-1 datapath query device

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#:   0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#          Adapter/Hard Disk         State   Mode     Select  Errors
    0  Scsi Port2 Bus0/Disk0 Part0        OPEN   NORMAL      180       0
    1  Scsi Port2 Bus0/Disk0 Part0        OPEN   NORMAL        0       0
    2  Scsi Port2 Bus0/Disk0 Part0        OPEN   NORMAL      145       0
    3  Scsi Port2 Bus0/Disk0 Part0        OPEN   NORMAL        0       0

DEV#:   1  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#          Adapter/Hard Disk         State   Mode     Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0        OPEN   NORMAL       25       0
    1  Scsi Port2 Bus0/Disk1 Part0        OPEN   NORMAL      164       0
    2  Scsi Port2 Bus0/Disk1 Part0        OPEN   NORMAL        0       0
    3  Scsi Port2 Bus0/Disk1 Part0        OPEN   NORMAL      136       0

C:\Program Files\IBM\SDDDSM>

9.5.4 Migrating the VDisk from image mode to managed mode


The VDisk is migrated to managed mode by migrating the complete VDisk to another MDG, as follows:
1. As shown in Figure 9-29 on page 701, select the VDisk. Then select Migrate a VDisk from the list and click Go.


Figure 9-29 Migrate a VDisk

2. Select the MDG to which to migrate the disk and the number of used threads, as shown in Figure 9-30. Click OK.

Figure 9-30 Migrating virtual disks

Note: If you migrate the VDisks to another MDisk group, the extent size of the source and target managed disk group has to be equal.


3. The Migration Progress view will appear and enable you to monitor the migration progress (Figure 9-31).

Figure 9-31 View Progress

4. Click the percentage to show more detailed information about this VDisk. During the migration process, the VDisks are still in the old MDisk group, and your server can still access the data. After the migration is complete, the VDisk is in the new MDisk group MDG_DS45 and becomes a striped VDisk. Figure 9-32 shows the migrated VDisk in the new MDG.

Figure 9-32 VDisk W2k8_log in new MDisk group
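If you prefer the CLI, a hedged equivalent of this GUI migration (using the VDisk name W2k8_Log and the target MDisk group MDG_DS45 from our example; the thread count is optional) is:

svctask migratevdisk -vdisk W2k8_Log -mdiskgrp MDG_DS45 -threads 4
svcinfo lsmigrate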

9.5.5 Migrating the VDisk from managed mode to image mode


The VDisk in managed mode can be migrated to image mode. In this example, we migrate a managed VDisk to an image mode VDisk.
1. Create an empty MDG, following the same procedure as shown previously, once for each VDisk that you want to migrate to image mode. These MDGs will host the target MDisks that we will map to our server at the end of the migration.
2. Check the VDisk that you want to migrate and select Migrate to an image mode VDisk from the drop-down menu (Figure 9-33 on page 703). Click Go.


Figure 9-33 Migrate to an image mode VDisk

3. The Introduction window appears. Click Next (Figure 9-34).

Figure 9-34 Migrate to an image mode VDisk

4. Select the source VDisk copy and click Next (Figure 9-35).

Figure 9-35 Migrate to a image mode VDisk


5. Select a target MDisk by clicking the radio button for it (Figure 9-36). Click Next.

Figure 9-36 Select the Target MDisk

6. Select an MDG by clicking the radio button for it (Figure 9-37). Click Next.

Figure 9-37 Select target MDisk group

Note: If you migrate the VDisks to another MDisk group, the extent size of the source and target managed disk group has to be equal.


7. Select the number of threads (1 to 4). The higher the number, the higher the priority (Figure 9-38). Click Next.

Figure 9-38 Select the number of threads

8. Verify the migration attributes (Figure 9-39) and click Finish.

Figure 9-39 Verify Migration Attributes

9. The progress window will appear. 10.Repeat these steps for every VDisk you want to migrate to an Image Mode VDisk. 11.Free the data from the SVC by using the procedure in 9.5.7, Free the data from the SVC on page 709.
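For reference, a hedged CLI equivalent of this wizard (the target MDisk and MDisk group names are hypothetical; the target MDisk must be unmanaged and large enough to hold the VDisk) is:

svctask migratetoimage -vdisk W2k8_Log -mdisk target_mdisk -mdiskgrp MDG_img_tgt
svcinfo lsmigrate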

9.5.6 Migrating the VDisk from image mode to image mode


Migrating a VDisk from image mode to image mode is used to move image mode VDisks from one storage subsystem to another storage subsystem without going through fully managed mode. The data stays available to the applications during this migration. This procedure is nearly the same as the procedure in 9.5.5, Migrating the VDisk from managed mode to image mode on page 702.

In this section, we describe how to migrate an image mode VDisk to another image mode VDisk. In our example, we migrate the VDisk W2k8_Log to another disk subsystem as an image mode VDisk. The second storage subsystem is a DS4500; a new LUN is configured on that storage subsystem and mapped to the SVC cluster. The LUN is available in the SVC as the unmanaged MDisk mdisk11, as shown in Figure 9-40.


Figure 9-40 Unmanaged disk on a DS4500 storage subsystem

To migrate the image mode VDisk to another image mode VDisk, perform the following steps: 1. Check the VDisk to migrate and select Migrate to an image mode VDisk from the drop-down menu. Click Go.

Figure 9-41 Migrate to an image mode VDisk

2. The Introduction window appears (Figure 9-42 on page 707). Click Next.


Figure 9-42 Migrate to an image mode VDisk

3. Select the VDisk source copy and click Next (Figure 9-43).

Figure 9-43 Select copy

4. Select a target MDisk by clicking the radio button for it (Figure 9-44 on page 708). Click Next.


Figure 9-44 Select Target MDisk

5. Select a target Managed Disk Group by clicking the radio button for it. Click Next.

Figure 9-45 Select MDisk Group

6. Select the number of threads (1 to 4), as shown in Figure 9-46. The higher the number, the higher the priority. Click Next.

Figure 9-46 Select the Threads

7. Verify the migration attributes (Figure 9-47 on page 709) and click Finish.


Figure 9-47 Verify Migration Attributes

8. Check the progress window (Figure 9-48) and click Close.

Figure 9-48 Progress window

9. Repeat these steps for all image mode VDisks you want to migrate. 10.If you want to free the data from the SVC, use the procedure in 9.5.7, Free the data from the SVC on page 709.

9.5.7 Free the data from the SVC


If your data resides in an image mode VDisk inside the SVC, it is possible to free the data from the SVC. The sections listed below show how to migrate data to an image mode VDisk. Depending on your environment, you might have to follow these procedures before freeing the data from the SVC:
- 9.5.5, Migrating the VDisk from managed mode to image mode on page 702
- 9.5.6, Migrating the VDisk from image mode to image mode on page 705

To free the data from the SVC, we use the delete VDisk procedure.


If the command succeeds on an image mode VDisk, the underlying back-end storage controller will be consistent with the data that a host could previously have read from the image mode VDisk; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode VDisk causes the MDisk associated with the VDisk to be ejected from the MDG. The mode of the MDisk returns to unmanaged.

Note: This only applies to image mode VDisks. If you delete a normal VDisk, all data will also be deleted.

As shown in Figure 9-27 on page 699, the SAN disks currently reside on the SVC 2145 device. Check that you have installed supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking and add the host to the masking.
3. Open the Virtual Disk to Host mappings view in the SVC Console, mark your host, select Delete a Mapping, and click Go (Figure 9-49).

Figure 9-49 Delete a mapping

4. Confirm the task by clicking Delete (Figure 9-50).

Figure 9-50 Delete a mapping

5. The VDisk is removed from the SVC. 6. Repeat steps 3 and 4 for every disk you want to free from the SVC. 7. Power on your host system.
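A hedged CLI sketch of the same operation (the host name W2K8_HOST is hypothetical; the VDisk name is from our example) removes the host mapping and then deletes the image mode VDisk, which returns the associated MDisk to unmanaged mode:

svctask rmvdiskhostmap -host W2K8_HOST W2k8_Log
svctask rmvdisk W2k8_Log

The delete only completes after the cached fast write data has been flushed to the underlying LUN.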

9.5.8 Put the disks online in Windows 2008 that have been freed from SVC
1. Using your DS4500 Storage Manager interface, now remap the two LUNs that were MDisks back to your W2K8 server.


2. Open your Computer Management window. Figure 9-51 shows that the LUNs are now back to an IBM 1814 type.

Figure 9-51 IBM1814 Type

3. Open your Disk Management window, and you will see that the disks have appeared. You might need to reactivate each disk using the right-click option.

Figure 9-52 W2k8 Disk Management


9.6 Migrating Linux SAN disks to SVC disks


In this section, we move the two LUNs from a Linux server that is currently booting directly off our DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and finally move them back to image mode disks, so that those LUNs can then be masked/mapped back to the Linux server directly.

Using this example will help you perform any one of the following activities in your environment:
- Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. This would be the first activity that you would perform when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap/remask disks using your storage subsystem LUN management tool. This step is detailed in 9.6.2, Prepare your SVC to virtualize disks on page 715.
- Move data between storage subsystems while your Linux server is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment, or if you wanted to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking availability, performance, and redundancy into account. This step is covered in 9.6.4, Migrate the image mode VDisks to managed MDisks on page 722.
- Move your Linux server's LUNs back to image mode VDisks so that they can be remapped/remasked directly back to the Linux server. This step is detailed in 9.6.5, Preparing to migrate from the SVC on page 725.

These three activities can be used individually, or together, enabling you to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. Using only some of these activities enables you to introduce the SVC into, or remove it from, your environment. The only downtime required for these activities is the time it takes you to remask/remap the LUNs between the storage subsystems and your SVC.


In Figure 9-53, we show our Linux environment.

Figure 9-53 Linux SAN environment

Figure 9-53 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
- The LUN with SCSI ID 0 holds the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.

Note: To successfully boot a host off the SAN, the LUN needs to have been assigned SCSI LUN ID 0.

This LUN is seen by Linux as our /dev/sda disk.
- We have also mapped a second disk (SCSI ID 1) to the host. It is 5 GB in size and is mounted in the folder /data on disk /dev/dm-2.

Example 9-2 shows the disks that are directly attached to the Linux host.
Example 9-2 Directly attached disks

[root@Palau data]# df
Filesystem                       1K-blocks     Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   10093752  1971344   7601400  21% /
/dev/sda1                           101086    12054     83813  13% /boot
tmpfs                              1033496        0   1033496   0% /dev/shm
/dev/dm-2                          5160576   158160   4740272   4% /data
[root@Palau data]#


Our Linux server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem, as shown in Figure 9-53 on page 713:
- The Linux server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.

9.6.1 Connecting the SVC to your SAN fabric


This section covers the basic steps that you take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you should be able to accomplish them without any downtime to any host or application that is also using your storage area network. If you already have an SVC connected, you can safely go to 9.6.2, Prepare your SVC to virtualize disks on page 715.

Connecting the SVC to your SAN fabric requires you to:
- Assemble your SVC components (nodes, UPS, and master console), cable them correctly, power them on, and verify that they are visible on your storage area network. This is covered in much greater detail in Chapter 3, Planning and configuration on page 63.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (our Black zone in Figure 9-54 on page 715). This zone should contain all the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two node cluster, where each node has four ports, so our Black zone has eight WWNs defined.
  - A storage zone (our Red zone). This zone should also have all the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that the SVC will virtualize.
  - A host zone (our Blue zone). This zone should contain the ports/WWNs for each host that will access VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 9-54.


Figure 9-54 SAN environment with SVC attached

9.6.2 Prepare your SVC to virtualize disks


This section covers the preparation tasks that we can perform before taking our Linux server offline. These are all non-disruptive activities and should not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two Linux LUNs to the SVC, they will first be used in image mode, and as such, we need a managed disk group to hold those disks. First, we need to create an empty managed disk group for each of the disks, using the commands in Example 9-3. Our managed disk groups will be called Palau-MDG0 and Palau-MDG1 and will hold our boot LUN and data LUN, respectively.
Example 9-3 Create empty mdiskgroup

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Data -ext 512 MDisk Group, id [7], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning 6 Palau_SANB online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0


7 Palau_Data online 512 0 0.00MB 0 IBM_2145:ITSO-CLS1:admin>

0 0.00MB

0 0.00MB

0 0

Create your host definition


If your zone preparation has been performed correctly, the SVC should be able to see the Linux server's HBA adapters on the fabric (our host only had one HBA). The svcinfo lshbaportcandidate command on the SVC lists all the WWNs that the SVC can see on the SAN fabric that have not yet been allocated to a host. Example 9-4 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it would indicate that we have a zone configuration problem.)
Example 9-4 Display HBA port candidates

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B89C1CD 210000E08B054CAA 210000E08B0548BC 210000E08B0541BC 210000E08B89CCC2 IBM_2145:ITSO-CLS1:admin> If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 9-55 shows our configured ports on an IBM DS4700 storage subsystem.

Figure 9-55 Display port WWNs


After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. These commands can be seen in Example 9-5.
Example 9-5 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD Host, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111 iogrp_count 4 WWPN 210000E08B89C1CD node_logged_in_count 4 state inactive WWPN 210000E08B054CAA node_logged_in_count 4 state inactive IBM_2145:ITSO-CLS1:admin>

Verify that we can see our storage subsystem


If our zoning has been performed correctly, the SVC should also be able to see the storage subsystem with the svcinfo lscontroller command (Example 9-6).
Example 9-6 Discover storage controller

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller id controller_name ctrl_s/n vendor_id product_id_low product_id_high 0 DS4500 IBM 1742-900 1 DS4700 IBM 1814 FAStT IBM_2145:ITSO-CLS1:admin> The storage subsystem can be renamed to something more meaningful (if we had many storage subsystems connected to our SAN fabric, then renaming them makes it considerably easier to identify them) with the svctask chcontroller -name command.
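As a one-line sketch (the new controller name is hypothetical; the controller ID is taken from the output above), renaming a discovered controller could look like this:

svctask chcontroller -name ITSO_DS4500 0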

Get the disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available, unmanaged MDisks (in case there are many seen by the SVC), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the right serial numbers before we create the image mode VDisks. If you are also using a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-56 on page 718 and Figure 9-57 on page 718.


Figure 9-56 Obtaining the disk serial number

Figure 9-57 Obtaining the disk serial number


Before we move the LUNs to the SVC, we have to configure the host multipath configuration for the SVC. To do this, edit your multipath.conf file and restart the multipathd daemon, as shown in Example 9-7, adding the content of Example 9-8 to the file.
Example 9-7 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#

Example 9-8 Data to add to file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.

9.6.3 Move the LUNs to the SVC


In this step, we move the LUNs that are assigned to the Linux server and reassign them to the SVC. Our Linux server has two LUNs: one LUN is for our boot disk and operating system file systems, and the other LUN holds our application and data files.

Moving both LUNs at once requires the host to be shut down. If we only wanted to move the LUN that holds our application and data files, we could do that without rebooting the host. The only requirement would be that we unmount the file system and vary off the volume group to ensure data integrity during the reassignment.

Because we intend to move both LUNs at the same time, these are the required steps:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you were just moving the LUNs that contained the application and data, you could follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that volume group with the vgchange -a n VOLUMEGROUP_NAME command.
d. If you can, also unload your HBA driver using rmmod DRIVER_MODULE. This removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, those details are not provided here.


3. Using Storage Manager (our storage subsystem management tool), we can unmap/unmask the disks from the Linux server and remap/remask the disks to the SVC. Note: Even though we are using Boot from SAN, you can also map the boot disk with any LUN number to the SVC. It does not have to be 0. This is only important afterwards when we configure the mapping in the SVC to the host. 4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-9 shows the commands we used to discover our MDisks and verify that we have the correct ones.
Example 9-9 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 26 mdisk26 online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 27 mdisk27 online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin> Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk task display) with the serial number you took earlier (in Figure 9-56 and Figure 9-57 on page 718). 5. Once we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk related tasks (Example 9-10).
Example 9-10 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 26 md_palauS online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 27 md_palauD online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>


6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype image option (Example 9-11). This command will virtualize the disks in the exact same layout as though they were not virtualized.
Example 9-11 Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_SANB -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB Virtual Disk, id [29], successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Data -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data Virtual Disk, id [30], successfully create IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count 29 palau_SANB 0 io_grp0 online 4 Palau_SANB 12.0GB image 60050768018301BF280000000000002B 0 1 30 palau_Data 0 io_grp0 online 4 Palau_Data 5.0GB image 60050768018301BF280000000000002C 0 1 7. Map the new image mode VDisks to the host (Example 9-12). Attention: Make sure that you map the boot VDisk with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 9-12 Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B 0 Palau 1 30 palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C

IBM_2145:ITSO-CLS1:admin>

Note: While the application is in a quiescent state, you could choose to FlashCopy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy has completed before starting your application.

8. Power on your host server and enter your FC HBA adapter BIOS before booting the OS, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl + Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you need to follow these steps to see the new VDisk:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new VDisks (these details are beyond the scope of this book).
b. Check your syslog and verify that the kernel found the new VDisks. On Red Hat Enterprise Linux, the syslog is stored in /var/log/messages.
c. If your application and data are on an LVM volume, rediscover the volume group, and then run vgchange -a y VOLUME_GROUP to activate the volume group.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 9-13). The df output shows us that all disks are available again.
Example 9-13 Mount data disk

[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem                       1K-blocks     Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   10093752  1938056   7634688  21% /
/dev/sda1                           101086    12054     83813  13% /boot
tmpfs                              1033496        0   1033496   0% /dev/shm
/dev/dm-2                          5160576   158160   4740272   4% /data
[root@Palau data]#

11.You are now ready to start your application.

9.6.4 Migrate the image mode VDisks to managed MDisks


While the Linux server is still running, and our file systems are in use, we now migrate the image mode VDisks onto striped VDisks, with the extents being spread over the other three MDisks. In our example, the three new LUNs are located on an DS4500 storage subsystem, so we will also move to another storage subsystem in this example.

Preparing MDisks for striped mode VDisks


From our second storage subsystem, we have:

- Created and allocated three new LUNs to the SVC
- Discovered them as MDisks
- Renamed these LUNs to something more meaningful
- Created a new MDisk group
- Put all these MDisks into this group

You can see the output of our commands in Example 9-14.
Example 9-14 Create a new MDisk group

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512 MDisk Group, id [8], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 600a0b8000174233000000b5486d255b00000000000000000000000000000000 26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30 IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000


30 palau-md3 online managed MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks in the MD_palauVD mdiskgroup with the svctask migratevdisk command (Example 9-15). While the migration is running, our Linux server is still running. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-15. Listing the MDisk group with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDisk groups is slowly increasing as those extents are moved to the new MDisk group.
Example 9-15 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type MDisk_Group_Migration progress 25 migrate_source_vdisk_index 29 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type MDisk_Group_Migration progress 70 migrate_source_vdisk_index 30 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin> Once this task has completed, Example 9-16 shows that the VDisks are now spread over three MDisks.
Example 9-16 Migration complete

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped VDisks on another storage subsystem (DS4500) is now complete. The original MDisks (Palau-MDG0 and Palau-MD1) can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs were the last LUNs in use on our DS4700 storage subsystem, we could remove it from our SAN fabric.

9.6.5 Preparing to migrate from the SVC


Before we move the Linux server's LUNs from being accessed by the SVC as virtual disks to being accessed directly from the storage subsystem, we need to convert the VDisks into image mode VDisks. You might want to perform this activity for any one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.
- You want to ship a host and its data that is currently connected to the SVC to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.

There are also some other preparation activities that we can perform before we have to shut down the host and reconfigure the LUN masking/mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that the storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as shown in Figure 9-58 on page 726.


Figure 9-58 Environment with SVC

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new storage subsystem should be added to the Red zone so that the SVC can talk to it directly. We also need a Green zone for our host to use when we are ready for it to directly access the disk after it has been removed from the SVC. It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC should see the new storage subsystem's controller using the svcinfo lscontroller command, as shown in Example 9-6 on page 717. It is also a good idea to rename the controller to something more useful, which can be done with the svctask chcontroller -name command.

Create new LUNs


On our storage subsystem, we created two LUNs, and masked the LUNs so that the SVC can see them. These two LUNs will eventually be given directly to the host, removing the VDisks that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command, as shown in Example 9-17.
Example 9-17 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to something more meaningful, just so that they do not get confused with other MDisks being used by other activities. Also, we create an MDisk group to hold our new MDisks. This is shown in Example 9-18.
Example 9-18 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is now ready for the VDisk migration to image mode VDisks.

9.6.6 Migrate the VDisks to image mode VDisks


While our Linux server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is svctask migratetoimage and is shown in Example 9-19.
Example 9-19 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is not aware that its data is being physically moved between storage subsystems. Once the migration has completed, the image mode VDisks will be ready to be removed from the Linux server, and the real LUNs can be mapped/masked directly to the host using the storage subsystem's tool.
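The progress figures above were gathered by re-running svcinfo lsmigrate by hand. If you prefer to watch the migration from a management workstation, a small loop such as the following sketch can be used, assuming you can reach the cluster over SSH as the admin user (svccluster is a placeholder for your cluster's management address):

#!/bin/sh
# Poll the SVC every 30 seconds and print the migration progress lines
# until svcinfo lsmigrate no longer reports any running migrations.
while ssh admin@svccluster svcinfo lsmigrate | grep -q progress
do
    ssh admin@svccluster svcinfo lsmigrate | grep progress
    sleep 30
done
echo "No migrations in progress."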

9.6.7 Remove the LUNs from the SVC


The next step requires downtime on the Linux server, as we will remap/remask the disks so that the host sees them directly through the Green zone, as shown in Figure 9-58 on page 726. Our Linux server has two LUNs: one LUN holds our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at once requires the host to be shut down.

If we only wanted to move the LUN that holds our application and data files, then we could do that without rebooting the host. The only requirement would be that we unmount the file system, and vary off the volume group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need an additional entry in the multipath.conf file. Check with the storage subsystem vendor to see which content you have to add to the file. You might be able to install and modify it ahead of time.

As we intend to move both LUNs at the same time, here are the required steps:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host. If you were just moving the LUNs that contained the application and data, then you could follow this procedure instead:
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, then deactivate that volume group with the vgchange -a n VOLUMEGROUP_NAME command.
   d. If you can, unload your HBA driver using rmmod DRIVER_MODULE. This will remove the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, those details are not provided here.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-20). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these disks are no longer mapped to the Linux server.
Example 9-20 Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This makes the underlying MDisks unmanaged, as seen in Example 9-21 on page 730.

Note: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC automatically de-stages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to de-stage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the VDisk still has uncommitted data in the cache by using the command svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This attribute has the following meanings:
empty        No modified data exists in the cache.
not_empty    Some modified data might exist in the cache.
corrupt      Some modified data might have existed in the cache, but any such data has been lost.
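As a quick sketch of that check before you attempt the removal (palau_SANB is the VDisk from our example; substitute your own VDisk name, and note that most of the lsvdisk output is omitted here):

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk palau_SANB
(output truncated)
fast_write_state empty
(output truncated)

When fast_write_state reports empty, the svctask rmvdisk command will not fail with the CMMVC6212E message.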

Example 9-21 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the disks from the SVC back to the Linux server.
Attention: If one of the disks is used to boot your Linux server, then you need to make sure that it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds it during its initialization.
6. Power on your host server and enter your FC HBA BIOS before booting the OS, and make sure that you change the boot configuration so that it points to the storage subsystem LUN instead of the SVC. In our example, we performed the following steps on a QLogic HBA:
   a. Press Ctrl + Q to enter the HBA BIOS.
   b. Open Configuration Settings.
   c. Open Selectable Boot Settings.
   d. Change the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
   e. Exit the menu and save your changes.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all the actions that you have performed so far to get the server back online without data loss, that is:
- Remap/remask the LUNs back to the SVC.
- Run svctask detectmdisk to rediscover the MDisks.
- Recreate the VDisks with svctask mkvdisk.
- Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.

We are now ready to restart the Linux server. If all the zoning and LUN masking/mapping was done successfully, our Linux server should boot as though nothing has happened. If you only moved the application and data LUN, and left your Linux server running, then you would need to follow these steps to see the new disk:
   a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new disks (these details are beyond the scope of this book).
   b. Check your syslog and verify that the kernel found the new disks. On Red Hat Enterprise Linux, the syslog is stored in /var/log/messages.
   c. If your application and data are on an LVM volume, run vgscan to rediscover the volume group, and then run vgchange -a y VOLUME_GROUP to activate the volume group.
7. Mount your file systems with the mount /MOUNT_POINT command (Example 9-22). The df output shows us that all disks are available again.
Example 9-22 File system after migration

[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1938124   7634620  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau ~]#

8. You should be ready to start your application. And finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then will automatically be removed once the SVC determines that there are no VDisks associated with these MDisks.
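The host-side sequence from steps a through c and step 7 can be collected into a short script. The following is only a sketch; qla2xxx, itsovg, and /data are placeholders for your own HBA driver module, volume group, and mount point, and it assumes that /data has an /etc/fstab entry:

#!/bin/sh
# Reload the HBA driver so the kernel rediscovers the SCSI disks,
# then reactivate the LVM volume group and remount the file system.
modprobe qla2xxx                # placeholder HBA driver module
tail -n 20 /var/log/messages    # confirm that the kernel found the disks
vgscan                          # rediscover LVM volume groups
vgchange -a y itsovg            # placeholder volume group name
mount /data                     # placeholder mount point (from /etc/fstab)
df /data                        # verify that the file system is available again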

9.7 Migrating ESX SAN disks to SVC disks


In this section, we move the two LUNs from our VMware ESX server to the SVC. The ESX operating system itself is installed locally on the host, but two SAN disks are connected, and the virtual machines are stored there. We then manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks, so that those LUNs can then be masked/mapped back to the VMware ESX server directly. This example should help you perform any one of the following activities in your environment:
- Move your ESX server's data LUNs (the VMware VMFS file systems where you might have your virtual machines stored), which are directly accessed from a storage subsystem, to virtualized disks under the control of the SVC.
- Move LUNs between storage subsystems while your VMware virtual machines are still running. You might perform this activity to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. This step is covered in 9.7.4, Migrate the image mode VDisks on page 741.
- Move your VMware ESX server's LUNs back to image mode VDisks so that they can be remapped/remasked directly back to the server. This step starts in 9.7.5, Preparing to migrate from the SVC on page 744.
These activities can be used individually or together, enabling you to migrate your VMware ESX server's LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. Even if you do not use all three activities, they enable you to introduce the SVC into your environment, or to move the data between your storage subsystems. The only downtime required for these activities is the time it takes you to remask/remap the LUNs between the storage subsystems and your SVC. In Figure 9-59, we show our starting SAN environment.

Figure 9-59 ESX environment before migration

Figure 9-59 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem. Our ESX server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem, as shown in Figure 9-59:
- The ESX server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our ESX server.

9.7.1 Connecting the SVC to your SAN fabric


This section covers the basic steps to take to introduce the SVC into your SAN environment. While we only summarize these activities here, you should be able to accomplish this without any downtime to any host or application that is also using your storage area network. If you have an SVC already connected, then you can safely jump to the instructions given in 9.7.2, Prepare your SVC to virtualize disks on page 735.

Be very careful connecting the SVC into your storage area network, as it will require you to connect cables to your SAN switches and alter your switch zone configuration. Doing these activities incorrectly could render your SAN inoperable, so make sure you fully understand the impact of everything you are doing.
Connecting the SVC to your SAN fabric will require you to:
- Assemble your SVC components (nodes, UPS, and master console), cable them correctly, power them on, and verify that the SVC is visible on your storage area network.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (the Black zone in Figure 9-60). This zone should just contain all the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two node cluster where each node has four ports, so our Black zone has eight WWNs defined.
  - A storage zone (our Red zone). This zone should also have all the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that the SVC will virtualize.
  - A host zone (our Blue zone). This zone should contain the ports/WWNs for each host that will access VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 9-60.

Figure 9-60 SAN environment with SVC attached

9.7.2 Prepare your SVC to virtualize disks


This section covers the preparatory tasks we perform before taking our ESX server or virtual machines offline. These are all non-disruptive activities, and should not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two ESX LUNs to the SVC, they will first be used in image mode, and as such, we need a managed disk group to hold those disks. First, we create an empty managed disk group for the disks, using the command in Example 9-23. Our managed disk group, which will hold both data LUNs, is called MDG_Nile_VM.
Example 9-23 Create empty MDisk group

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Create the host definition


If your zone preparation above has been performed correctly, the SVC should be able to see the ESX server's HBA adapters on the fabric (our host only had one HBA). First, we get the WWNs for our ESX server's HBA, as we have many hosts connected to our SAN fabric and in the Blue zone. We want to make sure we have the correct WWNs to reduce our ESX server's downtime. Log into your VMware management console as root, navigate to Configuration, and then select Storage Adapters. The storage adapters are shown to the right of this window and display all the necessary information. Figure 9-61 shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.

Figure 9-61 Obtain your WWN using the VMware Management Console

The svcinfo lshbaportcandidate command on the SVC will list all the WWNs that the SVC can see on the SAN fabric that have not yet been allocated to a host. Example 9-24 shows the output of the nodes it found on our SAN fabric. (If the port did not show up, it would indicate that we have a zone configuration problem.)

Example 9-24 Add the host to the SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWNs to this entry. These commands can be seen in Example 9-25.
Example 9-25 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>

Verify that you can see your storage subsystem


If our zoning has been performed correctly, the SVC should also be able to see the storage subsystem with the svcinfo lscontroller command (Example 9-26).
Example 9-26 Available storage controllers

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT

Get your disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available unmanaged MDisks (in case there are many seen by the SVC), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the right serial numbers before we create the image mode VDisks.
If you are also using a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-62 and Figure 9-63 on page 737.

Figure 9-62 Obtaining the disk serial number

Figure 9-63 Obtaining the disk serial number

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.

9.7.3 Move the LUNs to the SVC


In this step, we move the LUNs assigned to the ESX server and reassign them to the SVC. Our ESX server has two LUNs, as shown in Figure 9-64.

Figure 9-64 VMWare LUNs

The virtual machines are located on these LUNs. So, in order to move these LUNs under the control of the SVC, we do not need to reboot the whole ESX server, but we do have to stop or suspend all VMware guests that are using them.

Move VMware guest LUNs


To move the VMware LUNs to the SVC, perform the following steps:
1. Using Storage Manager, we have identified the LUN number that has been presented to the ESX server. Make sure to remember which LUN had which LUN number (Figure 9-65).

Figure 9-65 Identify LUN numbers in IBM DS4000 Storage Manager

2. Next, identify all the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary tab. The datastore used is displayed under Datastore. Figure 9-66 on page 739 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.

Figure 9-66 Identify LUNs used by virtual machines

3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is no guest operating system that is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore you want to migrate.
5. Once the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap/unmask the disks from the ESX server and remap/remask the disks to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named as mdiskN, where N is the next available MDisk number (starting from 0). Example 9-27 shows the commands we used to discover our MDisks and verify that we have the correct ones.
Example 9-27 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk task display) with the serial number you obtained earlier (in Figure 9-62 and Figure 9-63 on page 737).

7. Once we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-28).
Example 9-28 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode VDisks with the svctask mkvdisk command (Example 9-29). The parameter -vtype image makes sure that it will create image mode VDisks, which means the virtualized disks will have the exact same layout as though they were not virtualized.
Example 9-29 Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode VDisks to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping (Example 9-30).
Example 9-30 Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029

10. Now, using the VMware management console, rescan to discover the new VDisks. Open the Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you might receive geometry errors, as ESX discovers that the old disk has disappeared. Your VDisks will appear with new vmhba devices.
11. We are now ready to restart the VMware guests. The VMware LUNs have now been successfully migrated to the SVC.

9.7.4 Migrate the image mode VDisks


While the VMware server and its virtual machines are still running, we now migrate the image mode VDisks onto striped VDisks, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode VDisks


In this example, we migrate the image mode VDisks to striped VDisks, and we move the data to another storage subsystem in one step.

Adding a new storage subsystem to SVC


If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as shown in Figure 9-67.

Figure 9-67 ESX SVC SAN environment

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new storage subsystem should be added to the Red zone so that the SVC can talk to it directly.

We also need a Green zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones. In our environment, we have:
- Created three LUNs on another storage subsystem and mapped them to the SVC.
- Discovered them as MDisks.
- Created a new MDisk group.
- Renamed these MDisks to something more meaningful.
- Put all these MDisks into this group.
You can see the output of our commands in Example 9-31.

Example 9-31 Create a new MDisk group

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks in the new managed disk group (MDG_ESX_VD) with the svctask migratevdisk command (Example 9-32). While the migration is running, our VMware ESX server will remain running, as will our VMware guests. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-32. Listing the MDisk group with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDisk group is slowly increasing as those extents are moved to the new MDisk group.
Example 9-32 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp

id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 2 130.0GB 512 1.0GB 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 35.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 9-33, you can see that all the virtual capacity has now been moved from the old MDisk group (MDG_Nile_VM) to the new MDisk group (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 9-33 List MDisk group

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 0 130.0GB 512 130.0GB 0.00MB 0.00MB 0.00MB 0 0
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>

Our migration to striped VDisks on the new storage subsystem is now complete. The original MDisks can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs were the last LUNs in use on our storage subsystem, we could also remove that subsystem from our SAN fabric.

9.7.5 Preparing to migrate from the SVC


Before we move the ESX server's LUNs from being accessed by the SVC as virtual disks back to being accessed directly from the storage subsystem, we need to convert the VDisks into image mode VDisks. You might want to perform this activity for any one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.
- You want to ship a host and its data that is currently connected to the SVC to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.

There are also some other preparatory activities that we can do before we need to shut down the host and reconfigure the LUN masking/mapping. This section covers those activities. In our example, we will move VDisks located on a DS4500 to image mode VDisks located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as described in Adding a new storage subsystem to SVC on page 741 and Make fabric zone changes on page 741.

Create new LUNs


On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see them. These two LUNs will eventually be given directly to the host, removing the VDisks that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command, as shown in Example 9-34.
Example 9-34 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to something more meaningful, just so that they do not get confused with other MDisks being used by other activities. Also, we create an MDisk group to hold our new MDisks. This is all shown in Example 9-35.
Example 9-35 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning

4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is now ready for the VDisk migration to image mode VDisks.

9.7.6 Migrate the managed VDisks to image mode VDisks


While our ESX server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is svctask migratetoimage, and is shown in Example 9-36.
Example 9-36 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

During the migration, our ESX server will not be aware that its data is being physically moved between storage subsystems. We can continue to run and use the virtual machines running on the server. You can check the migration status with the command svcinfo lsmigrate, as shown in Example 9-37.
Example 9-37 The svcinfo lsmigrate command and output

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2


migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

Once the migration has completed, the image mode VDisks will be ready to be removed from the ESX server, and the real LUNs can be mapped/masked directly to the host using the storage subsystem's tool.

9.7.7 Remove the LUNs from the SVC


Your ESX server's configuration determines in what order your LUNs are removed from the control of the SVC, and whether you need to reboot the ESX server as well as suspending the VMware guests. In our example, we have moved the virtual machine disks, so in order to remove these LUNs from the control of the SVC, we have to stop or suspend all VMware guests that are using them. Perform the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks. This can be achieved with the svcinfo lshostvdiskmap command, as shown in Example 9-38. Compare the VDisk UIDs and sort out the information.
Example 9-38 Note SCSI LUN IDs

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1

IBM_2145:ITSO-CLS1:admin>

2. Shut down or suspend all our guests using the LUNs. You can use the same method used in Move VMware guest LUNs on page 738 to identify the guests using these LUNs.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-39). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these VDisks are no longer mapped to the ESX server.
Example 9-39 Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make the MDisks unmanaged, as shown in Example 9-40.

Note: When you run the svctask rmvdisk command, the SVC will first double-check that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command will fail with the error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC will automatically de-stage uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to de-stage, and how busy the I/O subsystem is, will determine how long this command takes to complete.
You can check if the VDisk has uncommitted data in the cache by using the command svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This attribute has the following meanings:
empty        No modified data exists in the cache.
not_empty    Some modified data might exist in the cache.
corrupt      Some modified data might have existed in the cache, but any such data has been lost.

Example 9-40 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000

IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the disks from the SVC back to the ESX server. Remember that we noted the SCSI LUN IDs in Example 9-38 on page 747. To map your LUNs on the storage subsystem, use the same SCSI LUN IDs that were used in the SVC.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all the actions that you have performed so far to get the server back online without data loss, that is:
- Remap/remask the LUNs back to the SVC.
- Run svctask detectmdisk to rediscover the MDisks.
- Recreate the VDisks with svctask mkvdisk.
- Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.

6. Now, using the VMware management console, rescan to discover the new disks. Figure 9-68 shows the view before the rescan. Figure 9-69 on page 750 shows the view after the rescan. Note that the size of the LUN has changed because we have moved to another LUN on another storage subsystem.

Figure 9-68 Before adapter rescan

Figure 9-69 After adapter rescan

During the rescan, you might receive geometry errors as ESX discovers that the old disk has disappeared. The disk will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then will be removed automatically once the SVC determines that there are no VDisks associated with these MDisks.
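To confirm the clean-up, a final check such as the following can be used (only a sketch, using the MDisk names from our example):

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
(once the SVC has processed the removal, ESX_IVD_SLES and ESX_IVD_W2K3 no longer appear in the list)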

9.8 Migrating AIX SAN disks to SVC disks


In this section, we move the two LUNs from an AIX server, which is directly attached to our DS4000 storage subsystem, over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks, so that those LUNs can then be masked/mapped back to the AIX server directly. This example should help you perform any one of the following activities in your environment:
- Move an AIX server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. This would be the first activity that you would do when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap/remask the disks using your storage subsystem LUN management tool. This step starts in 9.8.2, Prepare your SVC to virtualize disks on page 753.
- Move data between storage subsystems while your AIX server is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment and want to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. This step is covered in 9.8.4, Migrate image mode VDisks to VDisks on page 760.

- Move your AIX server's LUNs back to image mode VDisks, so that they can be remapped/remasked directly back to the AIX server. This step starts in 9.8.5, Preparing to migrate from the SVC on page 762.
These three activities can be used individually or together, enabling you to migrate your AIX server's LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. Even if you do not use all three activities, they enable you to introduce or remove the SVC from your environment. The only downtime required for these activities will be the time it takes you to remask/remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 9-70.

Figure 9-70 AIX SAN environment

Figure 9-70 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The disk hdisk3 makes up the LVM volume group itsoaixvg, and the disk hdisk4 makes up the LVM volume group itsoaixvg1, as shown in Example 9-41 on page 752.

Example 9-41 AIX SAN configuration

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02      1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02      1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg     active
hdisk1 0009cdda43c9dfd5 rootvg     active
hdisk2 0009cddabaef1d99 rootvg     active
hdisk3 0009cdda0a4c0dd5 itsoaixvg  active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs created on a SAN storage subsystem, as shown in Figure 9-70 on page 751:
- The AIX server's HBA cards are zoned so that they are in the Green (dotted line) zone with our storage subsystem.
- The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem and, using LUN masking, are directly available to our AIX server.

9.8.1 Connecting the SVC to your SAN fabric


This section covers the basic steps that you would take to introduce the SVC into your SAN environment. While this section only summarizes these activities, you should be able to accomplish this without any downtime to any host or application that is also using your storage area network. If you have an SVC already connected, then you can go to 9.8.2, Prepare your SVC to virtualize disks on page 753.
Be very careful, as connecting the SVC into your storage area network will require you to connect cables to your SAN switches and alter your switch zone configuration. Doing these activities incorrectly could render your SAN inoperable, so make sure you fully understand the impact of everything you are doing.
Connecting the SVC to your SAN fabric will require you to:
- Assemble your SVC components (nodes, UPS, and master console), cable them correctly, power them on, and verify that the SVC is visible on your storage area network.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (our Black zone in Figure 9-71). This zone should just contain all the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two node cluster, where each node has four ports, so our Black zone has eight WWNs defined.
  - A storage zone (our Red zone). This zone should also have all the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that the SVC will virtualize.
  - A host zone (our Blue zone). This zone should contain the ports/WWNs for each host that will access the VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 9-71.

Figure 9-71 SAN environment with SVC attached

9.8.2 Prepare your SVC to virtualize disks


This section covers the preparatory tasks that we perform before taking our AIX server offline. These are all non-disruptive activities and should not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two AIX LUNs to the SVC, they will first be used in image mode, and as such we need a managed disk group to hold those disks. First, we create an empty managed disk group for the disks, using the commands in Example 9-42 on page 754. Our managed disk group to hold the LUNs will be called aix_imgmdg.

Example 9-42 Create empty mdiskgroup

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7 aix_imgmdg online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>

Create our host definition


If your zone preparation above has been performed correctly, the SVC should be able to see the AIX server's HBA adapters on the fabric. First, we get the WWNs for our AIX server's HBAs, as we have many hosts connected to our SAN fabric and in the Blue zone. We want to make sure we have the correct WWNs to reduce our AIX server's downtime. Example 9-43 shows the commands to get the WWNs; our host's WWNs are 10000000C932A7FB and 10000000C932A800.
Example 9-43 Discover your WWN

#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC will list all the WWNs that the SVC can see on the SAN fabric that have not yet been allocated to a host. Example 9-44 shows the output of the nodes it found in our SAN fabric. (If the port did not show up, it would indicate that we have a zone configuration problem.)
Example 9-44 Add the host to the SVC

IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>

After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry. These commands can be seen in Example 9-45.
Example 9-45 Create the host entry

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>

Verify that we can see our storage subsystem


If our zoning has been performed correctly, the SVC should also be able to see the storage subsystem with the svcinfo lscontroller command (Example 9-46).
Example 9-46 Discover the storage controller

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>

Note: The svctask chcontroller command enables you to change the discovered storage subsystem name in the SVC. In complex SANs, it might be a good idea to rename your storage subsystem to something more meaningful.

Get the disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available unmanaged MDisks (in case there are many seen by the SVC), we obtain the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks. If you are also using a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-72 on page 757 and Figure 9-73 on page 757.
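When the MDisks are discovered later (Example 9-48), one way to confirm the match is to filter the lsmdisk output on the serial number obtained from Storage Manager. A minimal sketch, run from a management workstation (cluster_ip is a placeholder, and the serial number prefix shown is taken from the examples that follow):

ssh admin@cluster_ip "svcinfo lsmdisk -delim :" | grep 600a0b800026b28200004322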


Figure 9-72 Obtaining disk serial number

Figure 9-73 Obtaining disk serial number

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks and give them back to the host as VDisks.


9.8.3 Move the LUNs to the SVC


In this step, we move the LUNs that are assigned to the AIX server and reassign them to the SVC. Because we only want to move the LUNs that hold our application and data files, we can do this without rebooting the host. The only requirement is that we unmount the file systems and vary off the volume groups to ensure data integrity after the reassignment.

Before you start: Moving LUNs to the SVC requires that the SDD device driver is installed on the AIX server. This driver could also be installed ahead of time; however, it might require an outage of your host to do so.

As we intend to move both LUNs at the same time, here are the required steps:
1. Confirm that the SDD device driver is installed.
2. Unmount and vary off the volume groups:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are on an LVM volume, deactivate that volume group with the varyoffvg VOLUMEGROUP_NAME command.
Example 9-47 shows the commands that we ran on Kanaga.
Example 9-47 AIX command sequence

#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we can unmap/unmask the disks from the AIX server and remap/remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-48 shows the commands we used to discover our MDisks and verify that we have the correct ones.
Example 9-48 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 mdisk24 online unmanaged                             5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged                             8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000


IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk output) with the serial numbers you discovered earlier (in Figure 9-72 and Figure 9-73 on page 757).

5. Once we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-49).
Example 9-49 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online unmanaged                             5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged                             8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode VDisks with the svctask mkvdisk command and the option -vtype image (Example 9-50). This command virtualizes the disks with exactly the same layout as though they were not virtualized.
Example 9-50 Create the image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode VDisks to the host (Example 9-51).
Example 9-51 Map the VDisks to the host

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

Note: While the application is in a quiescent state, you could choose to FlashCopy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy has completed before starting your application.

Now we are ready to perform the following steps to put the image mode VDisks online:
1. Remove the old disk definitions, if you have not done so already.
2. Run cfgmgr -vs to rediscover the available LUNs.

3. If your application and data are on an LVM volume, rediscover the volume group, and then run the varyonvg VOLUME_GROUP command to activate the volume group.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You should now be ready to start your application.
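As a minimal sketch, assuming the volume group names from our earlier examples plus a hypothetical mount point of /itsoaixfs, the host-side sequence might look like this:

# cfgmgr -vs
# lspv
# varyonvg itsoaixvg
# varyonvg itsoaixvg1
# mount /itsoaixfs

The lspv step is only a sanity check that the new SVC disks and their volume groups are visible before you vary on and mount.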

9.8.4 Migrate image mode VDisks to VDisks


While the AIX server is still running and our file systems are in use, we now migrate the image mode VDisks onto striped VDisks, with the extents spread over three other MDisks.

Preparing MDisks for striped mode VDisks


On our storage subsystem, we have:
Created and allocated three LUNs to the SVC.
Discovered them as MDisks.
Renamed these MDisks to something more meaningful.
Created a new MDisk group.
Put all of these MDisks into this group.
You can see the output of our commands in Example 9-52.
Example 9-52 Create a new MDisk group

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online image     7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image     7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26     online unmanaged                             6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27     online unmanaged                             6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28     online unmanaged                             6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID


24 Kanaga_AIX  online image   7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image   7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online managed 6 aix_vd     6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online managed 6 aix_vd     6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online managed 6 aix_vd     6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks with the svctask migratevdisk command (Example 9-15 on page 724). While the migration is running, our AIX server is still running, and we can continue accessing the files. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-53. Listing the MDisk group with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDisk group is slowly increasing as those extents are moved to the new MDisk group.
Example 9-53 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

Once this task has completed, Example 9-54 shows that the VDisks are now spread over three MDisks in the managed disk group aix_vd. The old MDisk group is now empty.
Example 9-54 Migration complete

IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6



name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is now complete. The original MDisks can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs were the last LUNs in use on our storage subsystem, we could also remove that subsystem from our SAN fabric.
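For long-running migrations of this kind, the progress check shown in Example 9-53 can also be scripted from a management workstation instead of re-running svcinfo lsmigrate by hand. A minimal sketch (cluster_ip and the key path are placeholders):

while ssh -i /path/to/privatekey admin@cluster_ip "svcinfo lsmigrate" | grep -q progress
do
    sleep 60
done
echo "All SVC migrations have completed"

The loop simply waits until svcinfo lsmigrate no longer reports any active migrations.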

9.8.5 Preparing to migrate from the SVC


Before we change the AIX server's LUNs from being accessed by the SVC as virtual disks to being accessed directly from the storage subsystem, we need to convert the VDisks into image mode VDisks. You might want to perform this activity for any one of these reasons:
You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.
You want to ship a host and its data that is currently connected to the SVC to a site where there is no SVC.
Changes to your environment no longer require this host to use the SVC.
There are also some other preparatory activities that we can perform before we need to shut down the host and reconfigure the LUN masking/mapping. This section covers those activities.


If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment should look similar to ours, as shown in Figure 9-74.

Zoning for migration scenarios: AIX host, SAN, SVC I/O grp0, and the IBM or OEM storage subsystems, with the Green, Red, Blue, and Black zones.

Figure 9-74 Environment with SVC

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new storage subsystem should be added to the Red zone so that the SVC can talk to it directly. We also need a Green zone for our host to use when we are ready for it to directly access the disks after they have been removed from the SVC. It is assumed that you have created the necessary zones. Once your zone configuration is set up correctly, the SVC should see the new storage subsystem's controller by using the svcinfo lscontroller command, as shown in Example 9-55 on page 764. It is also a good idea to rename the controller to something more meaningful, which can be done with the svctask chcontroller -name command.


Example 9-55 Discovering the new storage subsystem

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                    IBM       1742-900
1  DS4700                    IBM       1814           FAStT
IBM_2145:ITSO-CLS2:admin>

Create new LUNs


On our storage subsystem, we created two LUNs and masked them so that the SVC can see them. These two LUNs will eventually be given directly to the host, removing the VDisks that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command, as shown in Example 9-56. In our example, we will use the two 10 GB LUNs located on the DS4500 subsystem, so in this step we migrate back to image mode VDisks and to another subsystem in one step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is the reason why they appear offline here.
Example 9-56 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed   7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed   7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed   6            aix_vd         6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed   6            aix_vd         6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed   6            aix_vd         6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29     online  unmanaged                             10.0GB   0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30     online  unmanaged                             10.0GB   0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to something more meaningful, just so that they do not get confused with other MDisks that are being used by other activities. Also, we create an MDisk group to hold our new MDisks. This is shown in Example 9-57 on page 764.
Example 9-57 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name          status  mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  KANAGA_AIXMIG online  0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
6  aix_vd        online  3           2           18.0GB   512         5.0GB         13.00GB          13.00GB       13.00GB       72             0
7  aix_imgmdg    offline 2           0           13.0GB   512         13.0GB        0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>

Our SVC environment is now ready for the VDisk migration to image mode VDisks.

9.8.6 Migrate the managed VDisks


While our AIX server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is svctask migratetoimage and is shown in Example 9-58.
Example 9-58 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed 7            aix_imgmdg     5.0GB    0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7            aix_imgmdg     8.0GB    0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed 6            aix_vd         6.0GB    000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed 6            aix_vd         6.0GB    000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed 6            aix_vd         6.0GB    000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG     online  image   3            KANAGA_AIXMIG  10.0GB   0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000


30 AIX_MIG1    online  image   3            KANAGA_AIXMIG  10.0GB   0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

During the migration, our AIX server is not aware that its data is being physically moved between storage subsystems. Once the migration has completed, the image mode VDisks are ready to be removed from the AIX server, and the real LUNs can be mapped/masked directly to the host by using the storage subsystem's tool.

9.8.7 Remove the LUNs from the SVC


The next step requires downtime, as we remap/remask the disks so that the host sees them directly through the Green zone. Because our LUNs hold only data files, and a dedicated volume group is used, we can do this without rebooting the host. The only requirement is that we unmount the file systems and vary off the volume group to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage subsystem might need a different driver than SDD. Check with the storage subsystem's vendor to see which driver you will need. You might be able to install this driver ahead of time.

Here are the required steps to remove the SVC:
1. Confirm that the correct device driver for the new storage subsystem is loaded. As we are moving to a DS4500, we can continue to use the SDD device driver.
2. Shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are on an LVM volume, deactivate that volume group with the varyoffvg VOLUMEGROUP_NAME command.


3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-59). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which should show that these disks are no longer mapped to the AIX server.
Example 9-59 Remove the VDisks from the host

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This makes the MDisks unmanaged, as shown in Example 9-60.

Note: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cache data for the VDisk being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty      No modified data exists in the cache.
not_empty  Some modified data might exist in the cache.
corrupt    Some modified data might have existed in the cache, but any such data has been lost.
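For example, before removing IVD_Kanaga, its cache state could be checked from a management workstation as follows (a minimal sketch; cluster_ip is a placeholder for your cluster's IP address):

ssh admin@cluster_ip "svcinfo lsvdisk IVD_Kanaga" | grep fast_write_state

When the attribute reports empty, the svctask rmvdisk command should complete without the CMMVC6212E error.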

Example 9-60 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name     status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
29 AIX_MIG  online unmanaged                             10.0GB   0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged                             10.0GB   0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>


5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the disks from the SVC back to the AIX server.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of your actions to get the server back online without data loss:
Remap/remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the VDisks with svctask mkvdisk.
Remap the VDisks back to the server with svctask mkvdiskhostmap.
Once you start the next step, you might not be able to turn back without the risk of data loss.

We are now ready to access the LUNs from the AIX server. If all of the zoning and LUN masking/mapping was done correctly, our AIX server should boot as though nothing had happened:
1. Run cfgmgr -S to discover the storage subsystem.
2. Use lsdev -Cc disk to verify that you have discovered your new disks.
3. Remove the references to all the old disks. Example 9-61 shows the removal using SDD and Example 9-62 on page 769 shows the removal using SDDPCM.
Example 9-61 Remove references to old paths using SDD

#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk5  Defined   1Z-08-02      SAN Volume Controller Device
hdisk6  Defined   1Z-08-02      SAN Volume Controller Device
hdisk7  Defined   1D-08-02      SAN Volume Controller Device
hdisk8  Defined   1D-08-02      SAN Volume Controller Device
hdisk10 Defined   1Z-08-02      SAN Volume Controller Device
hdisk11 Defined   1Z-08-02      SAN Volume Controller Device
hdisk12 Defined   1D-08-02      SAN Volume Controller Device
hdisk13 Defined   1D-08-02      SAN Volume Controller Device
vpath0  Defined                 Data Path Optimizer Pseudo Device Driver
vpath1  Defined                 Data Path Optimizer Pseudo Device Driver
vpath2  Defined                 Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02      1742-900 (900) Disk Array Device
#

Example 9-62 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined   1D-08-02      MPIO FC 2145
hdisk4 Defined   1D-08-02      MPIO FC 2145
hdisk5 Available 1D-08-02      MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02      MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the volume group, and then run the varyonvg VOLUME_GROUP command to activate the volume group.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You should now be ready to start your application.
Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and are then automatically removed once the SVC determines that there are no VDisks associated with them.

9.9 Using SVC for storage migration


The primary use of the SVC is not as a storage migration tool. However, the advanced capabilities of the SVC enable us to use it as one. Used this way, the SVC is temporarily added to your SAN environment to copy the data from one storage subsystem to the other. The SVC enables you to copy image mode VDisks directly from one subsystem to the other while host I/O is running. The only downtime required is when the SVC is added to and removed from your SAN environment. To use the SVC for migration purposes only, perform the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.


3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs or start the host again.
10. The migration is complete.
As you can see, very little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled entirely by the SVC, so the migration does not burden the host while it progresses. To use the SVC for storage migrations only, perform just the steps described in the following sections:
1. 9.5.2, SVC added between the host system and the DS4700 on page 689
2. 9.5.6, Migrating the VDisk from image mode to image mode on page 705
3. 9.5.7, Free the data from the SVC on page 709

9.10 Using VDisk Mirroring and Space-Efficient VDisk together


In this section we will show how it is possible to use the VDisk Mirroring feature and Space-Efficient VDisks together in order to move data from a fully allocated VDisk to a Space-Efficient VDisk.

9.10.1 Zero detect


Introduced in SVC 5.1 is zero detect for Space-Efficient VDisks. This feature enables customers to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) by using VDisk Mirroring. To migrate from a fully allocated VDisk to a Space-Efficient VDisk, we have to:
Add the target space-efficient copy.
Wait for synchronization to complete.
Remove the source fully allocated copy.
Using this feature, customers can easily free up managed disk space and make better use of their storage, all without needing to purchase any additional function for the SVC: the VDisk Mirroring and Space-Efficient VDisk functions are included in the base virtualization license. Customers with thin-provisioned storage on an existing storage system can migrate their data under SVC management using SEV without having to allocate additional storage space. Zero detect only works if the disk actually contains zeros; an uninitialized disk can contain anything, unless the disk was formatted (for example, by using the -fmtdisk flag on the mkvdisk command).
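As a minimal sketch, using the VDisk and MDisk group names from 9.10.2 (VD_Full and MDG_DS83), the conversion comes down to three commands:

svctask addvdiskcopy -mdiskgrp MDG_DS83 -vtype striped -rsize 2% -autoexpand -grainsize 32 VD_Full
svcinfo lsvdisksyncprogress VD_Full
svctask rmvdiskcopy -copy 0 VD_Full

Run lsvdisksyncprogress repeatedly and remove the original copy (copy 0) only after the new space-efficient copy reports a progress of 100. Section 9.10.2 shows the full procedure with the actual output.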


Figure 9-75 on page 771 shows the Space-Efficient VDisks zero detect concept.

Figure 9-75 SEV zero detect

Figure 9-76 shows the Space-Efficient VDisk organization.


Figure 9-76 Space Efficient Vdisk Organization

As shown in Figure 9-76 on page 772, a Space-Efficient VDisk contains the following elements:
Used capacity: Specifies the portion of real capacity that is being used to store data. For non-space-efficient copies, this value is the same as the VDisk capacity. If the VDisk copy is space-efficient, the value increases from zero to the real capacity value as more of the VDisk is written to.
Real capacity: The space actually allocated in the MDisk group (MDG). For a Space-Efficient VDisk, this value can be different from the total capacity.
Free capacity: Specifies the difference between the real capacity and the used capacity. The SVC continuously tries to keep this contingency capacity available. If the used capacity grows and the VDisk has been configured with the -autoexpand option, the SVC automatically expands the allocated space for this VDisk so that the contingency capacity is maintained.
Grains: The smallest units into which the allocated space can be divided.
Metadata: Allocated within the real capacity; it keeps track of the used capacity, real capacity, and free capacity.
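As a quick worked example of the -rsize parameter: for the 15 GB VDisk used in 9.10.2, -rsize 2% requests an initial real capacity of roughly 0.02 x 15360 MB = 307.2 MB. The value the SVC actually reports can differ somewhat (323.57 MB in Example 9-64, 307.20 MB in Example 9-68), because the real capacity also has to accommodate the metadata described above and depends on how the copy was created.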

9.10.2 VDisk Mirroring With SEV


In this section, we show an example of how to use the VDisk Mirroring feature with an SEV.
1. We create a fully allocated VDisk of 15 GB named VD_Full, as shown in Example 9-63.
Example 9-63 VD_Full creation example

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created


IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full id 2 name VD_Full IO_group_id 0 IO_group_name io_grp0 status offline mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 15.00GB type striped formatted yes mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000B throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status offline sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 15.00GB real_capacity 15.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize

2. We then add an SEV copy with the VDisk Mirroring option using the addvdiskcopy command and the autoexpand parameter as shown in Example 9-64.
Example 9-64 addvdiskcopy example

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
Vdisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full

id 2 name VD_Full IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id many mdisk_grp_name many capacity 15.00GB type many formatted yes mdisk_id many mdisk_name many FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000B throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 2 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 15.00GB real_capacity 15.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize copy_id 1 status online sync no primary no mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 323.57MB free_capacity 323.17MB


overallocation 4746 autoexpand on warning 80 grainsize 32

As you can see in Example 9-64 on page 773, VD_Full now has a copy_id 1 whose used_capacity is 0.41MB, which corresponds to the metadata, because the disk contains only zeros. The real_capacity is 323.57MB, which corresponds to the -rsize 2% value specified on the addvdiskcopy command. The free_capacity is 323.17MB, which is the real capacity minus the used capacity. As long as only zeros are written to the disk, the SEV does not consume space. Example 9-65 shows that the SEV is still not consuming space even when the copies are in sync.
Example 9-65 SEV display

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2 vdisk_id vdisk_name copy_id progress estimated_completion_time 2 VD_Full 0 100 2 VD_Full 1 100 IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full id 2 name VD_Full IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id many mdisk_grp_name many capacity 15.00GB type many formatted yes mdisk_id many mdisk_name many FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000B throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 2 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id

mdisk_name fast_write_state empty used_capacity 15.00GB real_capacity 15.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize copy_id 1 status online sync yes primary no mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 323.57MB free_capacity 323.17MB overallocation 4746 autoexpand on warning 80 grainsize 32 3. Now we can split the VDisk Mirror or remove one of the copies keeping the SEV copy as our valid one using the splitvdiskcopy or rmvdiskcopy command. If you need your copy as an SEV clone we suggest that you use splitvdiskcopy as that will generate a new VDisk and you will be able to map to any server you want. If you need your copy because you are migrating from a previous fully allocated VDisk to go to a SEV without any impact to the server operations, we suggest that you use the rmvdiskcopy command. In this case the original VDisk name will be kept and will remain mapped to the same server. Example 9-66 shows the splitvdiskcopy command.
Example 9-66 splitvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full Virtual Disk, id [7], successfully created IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state 2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty 7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV id 7 name VD_SEV 776

IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name MDG_DS83 capacity 15.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000D throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 323.57MB free_capacity 323.17MB overallocation 4746 autoexpand on warning 80 grainsize 32 Example 9-67 shows the rmvdiskcopy command.
Example 9-67 rmvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state 2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2 id 2

name VD_Full IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name MDG_DS83 capacity 15.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000B throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 1 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 323.57MB free_capacity 323.17MB overallocation 4746 autoexpand on warning 80 grainsize 32

9.10.3 Metro Mirror and SEV


In this section, we show an example of how to use Metro Mirror with an SEV as the target VDisk. Using Metro Mirror in an intracluster configuration is one way to migrate data. Bear in mind that, while VDisk Mirroring and VDisk migration are concurrent operations, Metro Mirror must be considered disruptive to data access, because at the end of the migration we need to map the Metro Mirror target VDisk to the server. With this example, we want to show how it is possible to migrate data with intracluster Metro Mirror using an SEV as the target VDisk, and how the real capacity and the free


capacity change as the used capacity changes during the Metro Mirror synchronization.
1. We use a fully allocated VDisk named VD_Full and create a Metro Mirror relationship with an SEV named VD_SEV. Example 9-68 shows the two VDisks and the rcrelationship creation.
Example 9-68 VDisks and rcrelationship

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD* id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state 2 VD_Full 0 io_grp0 online 1 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty 7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000F 0 1 empty IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full id 2 name VD_Full IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 0 mdisk_grp_name MDG_DS47 capacity 15.00GB type striped formatted yes mdisk_id mdisk_name FC_id FC_name RC_id 2 RC_name vdisk_UID 60050768018401BF2800000000000010 throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name MDG_DS47 type striped mdisk_id mdisk_name

fast_write_state empty used_capacity 15.00GB real_capacity 15.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV id 7 name VD_SEV IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name MDG_DS83 capacity 15.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018401BF280000000000000F throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid 0 fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 307.20MB free_capacity 306.79MB overallocation 5000 autoexpand off warning 1 grainsize 32 IBM_2145:ITSO-CLS2:admin>svctask mkrcrelationship -cluster 0000020061006FCA -master VD_Full -aux VD_SEV -name MM_SEV_rel RC Relationship, id [2], successfully created IBM_2145:ITSO-CLS2:admin>svcinfo lsrcrelationship MM_SEV_rel


id 2
name MM_SEV_rel
master_cluster_id 0000020061006FCA
master_cluster_name ITSO-CLS2
master_vdisk_id 2
master_vdisk_name VD_Full
aux_cluster_id 0000020061006FCA
aux_cluster_name ITSO-CLS2
aux_vdisk_id 7
aux_vdisk_name VD_SEV
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro

2. Now we start the rcrelationship and observe how the space allocation in the target VDisk changes until it reaches the total used capacity. Example 9-69 shows how to start the rcrelationship and shows the space allocation changing.
Example 9-69 rcrelationship and space allocation

IBM_2145:ITSO-CLS2:admin>svctask startrcrelationship MM_SEV_rel IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV id 7 name VD_SEV IO_group_id 0 IO_group_name io_grp0 status offline mdisk_grp_id 1 mdisk_grp_name MDG_DS83 capacity 15.00GB type striped formatted no . . type striped mdisk_id mdisk_name fast_write_state not_empty used_capacity 3.64GB real_capacity 3.95GB free_capacity 312.89MB overallocation 380 autoexpand on warning 80 grainsize 32


IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV id 7 name VD_SEV IO_group_id 0 mdisk_grp_id 1 mdisk_grp_name MDG_DS83 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 15.02GB real_capacity 15.03GB free_capacity 11.97MB overallocation 99 autoexpand on warning 80 grainsize 32

3. In conclusion, it is possible to use Metro Mirror to migrate data, and we could use an SEV as the target VDisk. However, this does not make much sense, because at the end of the initial data synchronization the SEV will have allocated as much space as the source (in our case, VD_Full). If you want to use Metro Mirror to migrate your data, we suggest using fully allocated VDisks for both the source and the target.


Appendix A. Scripting
In this appendix, we present a high-level overview of how to automate different tasks by creating scripts using the SVC command-line interface (CLI).



Scripting structure
When creating scripts to automate tasks on the SVC, use the structure illustrated in Figure A-1.

Create connection (SSH) to the SVC -> Run the command(s) -> Perform logging, with either scheduled or manual activation.

Figure A-1 Scripting structure for SVC task automation

Creating a connection (SSH) to the SVC


To create a connection to the SVC, the user running the script must have access to a private key that corresponds to a public key that was previously uploaded to the SVC. The private key is used to establish the SSH connection that is needed to use the CLI on the SVC. If the SSH key pair was generated without a passphrase, you can connect without the need for special scripting to pass in the passphrase. On UNIX systems, the ssh command can be used to create an SSH connection with the SVC. On Windows systems, a utility called plink.exe, which is provided with the PuTTY tool, can be used for this purpose. In the following examples, we use plink to create the SSH connection to the SVC.
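For example, on a UNIX system a single CLI command can be wrapped as follows (a minimal sketch; the key path and cluster_ip are placeholders for your own environment):

ssh -i /path/to/privatekey admin@cluster_ip "svcinfo lscluster -delim :"

The same pattern works for any svcinfo or svctask command; the plink equivalents are shown in the examples that follow.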

Executing the command(s)


When using the CLI, you can use the examples in Chapter 7, SVC operations using the CLI on page 337 for inspiration, or refer to the IBM System Storage SAN Volume Controller Command-Line Interface Users Guide, which can be downloaded from the SVC documentation page for each SVC code level at:
http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329743&taskind=1

Performing logging
When using the CLI, not all commands provide a usable response to determine the status of the invoked command. Therefore, we recommend that you always create checks that can be logged for monitoring and troubleshooting purposes.
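As a minimal sketch of this kind of logging on a UNIX system (the key path, cluster_ip, VDisk name, and log file location are all placeholders):

#!/bin/sh
# Verify that a VDisk exists and record a timestamped result line.
LOG=/var/log/svc_script.log
if ssh -i /path/to/privatekey admin@cluster_ip "svcinfo lsvdisk Host1_E_Drive" >> $LOG 2>&1
then
    echo "`date` check of VDisk Host1_E_Drive succeeded" >> $LOG
else
    echo "`date` check of VDisk Host1_E_Drive FAILED" >> $LOG
fi

Here the exit status returned by ssh, which reflects the exit status of the remote CLI command, is used as the success check.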


Automated VDisk creation


In the following example, we create a simple bat script that automates VDisk creation, to illustrate how such scripts are built. Creating scripts to automate SVC administration tasks is not limited to bat scripting: you can, in principle, encapsulate the CLI commands in scripts written in any programming language you prefer, or even program applets to perform routine tasks.

Connecting to the SVC using a predefined SSH connection


The easiest way to create an SSH connection to the SVC is when plink can call a predefined PuTTY session, as shown in Figure A-2 on page 786. Define a session that includes:
The Auto-login user name, set to your SVC admin user name (for example, admin). This parameter is set under the Connection Data category.
The Private key for authentication (for example, icat.ppk). This is the private key that you have already created. This parameter is set under the Connection Session Auth category.
The IP address of the SVC cluster. This parameter is set under the Session category.
A session name. Our example uses SVC:cluster1.
Your version of PuTTY might have these parameters set in different categories.


Figure A-2 Using a predefined SSH connection with plink

To use this predefined PuTTY session, the syntax is:
plink SVC1:cluster1
If a predefined PuTTY session is not used, the syntax is:
plink admin@9.43.36.117 -i "C:\DirectoryPath\KeyName.PPK"

Creating VDisks command using the CLI


In our example, we decided the following parameters are variables when creating the VDisks:
VDisk size (in GB): %1
VDisk name: %2
Managed Disk Group (MDG): %3
Use the following command:
svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3


Listing created VDisks


To log the fact that our script created the VDisk we defined when executing the script, we use the -filtervalue parameter as follows:
svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log

Invoking the sample script VDiskScript.bat


Finally, putting it all together, our sample bat script for creating a VDisk is created, as shown in Figure A-3.

-------------------------------------VDiskScript.bat--------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> E:\SVC_Jobs\VDiskScript.log
-------------------------------------------------------------------------------
Figure A-3 VDiskScript.bat

Using the script, we now create a VDisk with the following parameters:
VDisk size (in GB): 4 (%1)
VDisk name: Host1_E_Drive (%2)
Managed Disk Group (MDG): 1 (%3)
This is illustrated in Example A-1.
Example: A-1 Executing the script to create the VDisk

E:\SVC_Jobs>VDiskScript 4 Host1_E_Drive 1

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 4 -unit gb -name Host1_E_Drive -mdiskgrp 1
Virtual Disk, id [32], successfully created

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_E_Drive' 1>>E:\SVC_Jobs\VDiskScript.log

From the output of the log, as shown in Example A-2, we verify that the VDisk is created as intended.
Example: A-2 Logfile output from VDiskScript.bat

id name          IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count
32 Host1_E_Drive 0           io_grp0       online 1            MDG_DS47       4.0GB    striped                             60050768018301BF280000000000002E 0            1


SVC tree
Here is another example of using scripting to talk to the SVC. This script displays a tree-like structure for the SVC, as shown in Example A-3. The script has been written in Perl, and should work without modification using Perl on UNIX systems (such as AIX or Linux), Perl for Windows, or Perl in a Windows Cygwin environment.
Example: A-3 SVC Tree script output

$ ./svctree.pl 10.0.1.119 admin /cygdrive/c/Keys/icat.ssh + ITSO-CLS2 (10.0.1.119) + CONTROLLERS + DS4500 (0) + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed) + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed) + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed) + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed) + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed) + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed) + DS4700 (1) + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed) + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed) + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed) + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed) + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed) + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed) + MDISK GROUPS + MDG_0_DS45 (ID: 0 CAP: 144.0GB FREE: 120.0GB) + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed) + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed) + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed) + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed) + aix_imgmdg (ID: 7 CAP: 13.0GB FREE: 3.0GB) + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed) + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed) + iogrp0 (0) + NODES + Node2 (5) + Node1 (2) + HOSTS + W2k8 (0) + Senegal (1) + VSS_FREE (2) + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped STAT: online) + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped STAT: online) + VSS_RESERVED (3) + Kanaga (5) + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many STAT: online) + VDISKS + MDG_SE_VDisk3 (ID: 0 CAP: 10.2GB TYPE: many) + mdisk2 (ID: 10 CAP: 36.0GB MODE: managed CONT: DS4500) + mdisk_3 (ID: 12 CAP: 36.0GB MODE: managed CONT: DS4500) + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many) + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed CONT: DS4700) + Kanaga_AIX1 (ID: 24 CAP: 8.0GB MODE: managed CONT: DS4700) 788

+ msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped) + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped) + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: iogrp1 (1) + NODES + HOSTS + VDISKS iogrp2 (2) + NODES + HOSTS + VDISKS iogrp3 (3) + NODES + HOSTS + VDISKS recovery_io_grp (4) + NODES + HOSTS + VDISKS recovery_io_grp (4) + NODES + HOSTS + itsosvc1 (2200642269468) + VDISKS

DS4500) DS4500) DS4500) DS4500)

Example A-4 shows the coding for our script.


Example: A-4 svctree.pl

#!/usr/bin/perl $SSHCLIENT = ssh; # (plink or ssh) $HOST = $ARGV[0]; $USER = ($ARGV[1] ? $ARGV[1] : admin); $PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : /path/toprivatekey); $DEBUG = 0; die(sprintf(Please call script with cluster IP address. The syntax is: \n%s ipaddress loginname privatekey\n,$0)) if (! $HOST); sub TalkToSVC() { my $COMMAND = shift; my $NODELIM = shift; my $ARGUMENT = shift; my @info; if ($SSHCLIENT eq plink || $SSHCLIENT eq ssh) { $SSH = sprintf(%s -i %s %s@%s ,$SSHCLIENT,$PRIVATEKEY,$USER,$HOST); } else { die (ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n);

} if ($NODELIM) { $CMD = $SSH svcinfo $COMMAND $ARGUMENT\n; } else { $CMD = $SSH svcinfo $COMMAND -delim : $ARGUMENT\n; } print Running $CMD if ($DEBUG); open SVC,$CMD|; while (<SVC>) { print Got [$_]\n if ($DEBUG); chomp; push(@info,$_); } close SVC; return @info; } sub DelimToHash() { my $COMMAND = shift; my $MULTILINE = shift; my $NODELIM = shift; my $ARGUMENT = shift; my %hash; @details = &TalkToSVC($COMMAND,$NODELIM,$ARGUMENT); print $COMMAND: Got [,join(|,@details).]\n if ($DEBUG); my $linenum = 0; foreach (@details) { print $linenum, $_ if ($debug); if ($linenum == 0) { @heading = split(:,$_); } else { @line = split(:,$_); $counter = 0; foreach $id (@heading) { printf($COMMAND: ID [%s], value [%s]\n,$id,$line[$counter]) if ($DEBUG); if ($MULTILINE) { $hash{$linenum,$id} = $line[$counter++]; } else { $hash{$id} = $line[$counter++]; } } } $linenum++; }


return %hash; } sub TreeLine() { my $indent = shift; my $line = shift; my $last = shift; for ($tab=1;$tab<=$indent;$tab++) { print ; } if (! $last) { print + $line\n; } else { print | $line\n; } } sub TreeData() { my $indent = shift; my $printline = shift; *data = shift; *list = shift; *condition = shift; my $item; foreach $item (sort keys %data) { @show = (); ($numitem,$detail) = split($;,$item); next if ($numitem == $lastnumitem); $lastnumitem = $numitem; printf(CONDITION:SRC [%s], DST [%s], DSTVAL [%s]\n,$condition{SRC},$condition{DST},$data{$numitem,$condition{DST}}) if ($DEBUG); next if (($condition{SRC} && $condition{DST}) && ($condition{SRC} != $data{$numitem,$condition{DST}})); foreach (@list) { push(@show,$data{$numitem,$_}) } &TreeLine($indent,sprintf($printline,@show),0); } } # Gather our cluster information. %clusters = &DelimToHash(lscluster,1); %iogrps = &DelimToHash(lsiogrp,1); %nodes = &DelimToHash(lsnode,1); %hosts = &DelimToHash(lshost,1); %vdisks = &DelimToHash(lsvdisk,1); %mdisks = &DelimToHash(lsmdisk,1); %controllers = &DelimToHash(lscontroller,1);


%mdiskgrps = &DelimToHash(lsmdiskgrp,1); # We are now ready to display it. # CLUSTER $indent = 0; foreach $cluster (sort keys %clusters) { ($numcluster,$detail) = split($;,$cluster); next if ($numcluster == $lastnumcluster); $lastnumcluster = $numcluster; next if ("$clusters{$numcluster,'location'}" =~ /remote/); &TreeLine($indent,sprintf(%s (%s),$clusters{$numcluster,name},$clusters{$numcluster,cluster_IP_address}),0 ); # CONTROLLERS &TreeLine($indentiogrp+1,CONTROLLERS,0); $lastnumcontroller = ; foreach $controller (sort keys %controllers) { $indentcontroller = $indent+2; ($numcontroller,$detail) = split($;,$controller); next if ($numcontroller == $lastnumcontroller); $lastnumcontroller = $numcontroller; &TreeLine($indentcontroller, sprintf(%s (%s), $controllers{$numcontroller,controller_name}, $controllers{$numcontroller,id}) ,0); # MDISKS &TreeData($indentcontroller+1, %s (ID: %s CAP: %s MODE: %s), *mdisks, [name,id,capacity,mode], {SRC=>$controllers{$numcontroller,controller_name},DST=>controller_name}); } # MDISKGRPS &TreeLine($indentiogrp+1,MDISK GROUPS,0,[]); $lastnummdiskgrp = ; foreach $mdiskgrp (sort keys %mdiskgrps) { $indentmdiskgrp = $indent+2; ($nummdiskgrp,$detail) = split($;,$mdiskgrp); next if ($nummdiskgrp == $lastnummdiskgrp); $lastnummdiskgrp = $nummdiskgrp; &TreeLine($indentmdiskgrp, sprintf(%s (ID: %s CAP: %s FREE: %s), $mdiskgrps{$nummdiskgrp,name}, $mdiskgrps{$nummdiskgrp,id}, $mdiskgrps{$nummdiskgrp,capacity},


                      $mdiskgrps{$nummdiskgrp,'free_capacity'}),
              0);

    # MDISKS
    &TreeData($indentcontroller+1,
              "%s (ID: %s CAP: %s MODE: %s)",
              *mdisks,
              ['name','id','capacity','mode'],
              {SRC=>$mdiskgrps{$nummdiskgrp,'id'}, DST=>'mdisk_grp_id'});
  }

  # IOGROUP
  $lastnumiogrp = "";
  foreach $iogrp (sort keys %iogrps) {
    $indentiogrp = $indent + 1;
    ($numiogrp, $detail) = split($;, $iogrp);
    next if ($numiogrp == $lastnumiogrp);
    $lastnumiogrp = $numiogrp;
    &TreeLine($indentiogrp, sprintf("%s (%s)", $iogrps{$numiogrp,'name'}, $iogrps{$numiogrp,'id'}), 0);
    $indentiogrp++;

    # NODES
    &TreeLine($indentiogrp, "NODES", 0);
    &TreeData($indentiogrp+1,
              "%s (%s)",
              *nodes,
              ['name','id'],
              {SRC=>$iogrps{$numiogrp,'id'}, DST=>'IO_group_id'});

    # HOSTS
    &TreeLine($indentiogrp, "HOSTS", 0);
    $lastnumhost = "";
    %iogrphosts = &DelimToHash("lsiogrphost", 1, 0, $iogrps{$numiogrp,'id'});
    foreach $host (sort keys %iogrphosts) {
      my $indenthost = $indentiogrp + 1;
      ($numhost, $detail) = split($;, $host);
      next if ($numhost == $lastnumhost);
      $lastnumhost = $numhost;
      &TreeLine($indenthost,
                sprintf("%s (%s)", $iogrphosts{$numhost,'name'}, $iogrphosts{$numhost,'id'}),
                0);

      # HOSTVDISKMAP
      %vdiskhostmap = &DelimToHash("lshostvdiskmap", 1, 0, $hosts{$numhost,'id'});
      $lastnumvdisk = "";
      foreach $vdiskhost (sort keys %vdiskhostmap) {
        ($numvdisk, $detail) = split($;, $vdiskhost);


        next if ($numvdisk == $lastnumvdisk);
        $lastnumvdisk = $numvdisk;
        next if ($vdisks{$numvdisk,'IO_group_id'} != $iogrps{$numiogrp,'id'});
        &TreeData($indenthost+1,
                  "%s (ID: %s CAP: %s TYPE: %s STAT: %s)",
                  *vdisks,
                  ['name','id','capacity','type','status'],
                  {SRC=>$vdiskhostmap{$numvdisk,'vdisk_id'}, DST=>'id'});
      }
    }

    # VDISKS
    &TreeLine($indentiogrp, "VDISKS", 0);
    $lastnumvdisk = "";
    foreach $vdisk (sort keys %vdisks) {
      my $indentvdisk = $indentiogrp + 1;
      ($numvdisk, $detail) = split($;, $vdisk);
      next if ($numvdisk == $lastnumvdisk);
      $lastnumvdisk = $numvdisk;
      &TreeLine($indentvdisk,
                sprintf("%s (ID: %s CAP: %s TYPE: %s)",
                        $vdisks{$numvdisk,'name'},
                        $vdisks{$numvdisk,'id'},
                        $vdisks{$numvdisk,'capacity'},
                        $vdisks{$numvdisk,'type'}),
                0) if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});

      # VDISKMEMBERS
      if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
        %vdiskmembers = &DelimToHash("lsvdiskmember", 1, 1, $vdisks{$numvdisk,'id'});
        foreach $vdiskmember (sort keys %vdiskmembers) {
          &TreeData($indentvdisk+1,
                    "%s (ID: %s CAP: %s MODE: %s CONT: %s)",
                    *mdisks,
                    ['name','id','capacity','mode','controller_name'],
                    {SRC=>$vdiskmembers{$vdiskmember}, DST=>'id'});
        }
      }
    }
  }
}
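The script prints a simple indented tree of the cluster: its disk controllers, MDisk groups, I/O groups, nodes, hosts, and VDisks, which makes a convenient point-in-time configuration snapshot. As a usage illustration only (the file name svctree.pl and the output redirection are our assumptions, not part of the script), it can be run from the workstation that holds the private SSH key referenced by the $SSH command defined earlier in the script:

perl svctree.pl > svc_tree.txt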


Scripting alternatives
For an alternative to scripting, visit the Tivoli Storage Manager for Advanced Copy Services product page:
http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/
Additionally, IBM provides a suite of scripting tools based on Perl, which can be downloaded from:
http://www.alphaworks.ibm.com/tech/svctools


Appendix B. Node replacement
In this appendix, we discuss the process to replace nodes. For the latest information about replacing a node, refer to the development page at one of the following sites:
IBMers:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437
Business Partners (login required):
http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD104437
Clients:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437


Replacing nodes nondisruptively


You can replace SAN Volume Controller 2145-4F2, SAN Volume Controller 2145-8F2, or SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes in an existing, active cluster without an outage on the SVC or on your host applications. This procedure does not require that you change your SAN environment, because the replacement (new) node uses the same worldwide node name (WWNN) as the node that you are replacing. In fact, you can use this procedure to replace any model node with a different model node.
This task assumes that the following conditions exist:
The cluster software is at a level that supports the replacement node model; the 2145-8G4 model node, for example, requires the cluster to be running V4.2.0 or higher.
The new nodes that are configured are not powered on and not connected.
All nodes that are configured in the cluster are present.
All errors in the cluster error log are fixed.
There are no VDisks, MDisks, or controllers with a status of degraded or offline.
The SVC configuration has been backed up through the CLI or GUI and the file saved to the master console.
You have downloaded, installed, and run the latest SVC Software Upgrade Test Utility from http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585 to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure.
You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.
Note: If you are planning to redeploy the old nodes in your environment to create a test cluster or to add to another cluster, you must ensure that the WWNN of each of these old nodes is set to a unique number on your SAN. The recommendation is to document the factory WWNN of the new nodes that you are using to replace the old nodes and, in effect, swap the WWNNs so that each node still has a unique number. Failure to do this could lead to duplicate WWNNs and WWPNs, causing unpredictable SAN problems.
Perform the following steps to replace the nodes:
1. Perform the following steps to determine the node_name or node_id of the node that you want to replace, the iogroup_id or iogroup_name it belongs to, and which of the nodes is the configuration node. If the configuration node is to be replaced, it is recommended that it be upgraded last. If you can already identify which physical node equates to a node_name or node_id, the iogroup_id or iogroup_name it belongs to, and which node is the configuration node, you can skip this step and proceed to step 2 below.
a. Issue the following command from the command-line interface (CLI):
svcinfo lsnode -delim :
b. Under the column config node, look for the status of yes and record the node_name or node_id of this node for later use.
c. Under the columns id and name, record the node_name or node_id of all the other nodes in the cluster.
d. Under the columns IO_group_id and IO_group_name, record the iogroup_id or iogroup_name for all the nodes in the cluster.

e. Issue the following command from the CLI for each node_name or node_id to determine the front_panel_id for each node, and record the ID. This front_panel_id is physically located on the front of every node (it is not the serial number), and you can use it to determine which physical node equates to the node_name or node_id you plan to replace:
svcinfo lsnodevpd node_name or node_id
2. Perform the following steps to record the WWNN of the node that you want to replace:
a. Issue the following command from the CLI, where node_name or node_id is the name or ID of the node for which you want to determine the WWNN:
svcinfo lsnode -delim : node_name or node_id
b. Record the WWNN of the node that you want to replace.
3. Verify that all VDisks, MDisks, and disk controllers are online and that none are in a state of Degraded. If any are in this state, resolve the issue before going forward, or loss of access to data may occur when you perform step 4 below. This is an especially important step if this is the second node in the I/O group to be replaced.
a. Issue the following commands from the CLI, where object_id or object_name is the controller ID or controller name that you want to view. Verify that each disk controller shows its status as degraded no:
svcinfo lsvdisk -filtervalue status=degraded
svcinfo lsmdisk -filtervalue status=degraded
svcinfo lscontroller object_id or object_name
4. Issue the following CLI command to shut down the node that will be replaced, where node_name or node_id is the name or ID of the node that you want to delete:
svctask stopcluster -node node_name or node_id
Attention: Do not power off the node through the front panel in lieu of using the above command. Be careful that you do not issue the stopcluster command without the -node node_name or node_id parameters, because the entire cluster will be shut down if you do.
Issue the following CLI command to ensure that the node is shut down and the status is offline, where node_name or node_id is the name or ID of the original node. The node status should be offline:
svcinfo lsnode node_name or node_id
5. Issue the following CLI command to delete this node from the cluster and I/O group, where node_name or node_id is the name or ID of the node that you want to delete:
svctask rmnode node_name or node_id
6. Issue the following CLI command to ensure that the node is no longer a member of the cluster, where node_name or node_id is the name or ID of the original node. The node should not be listed in the command output:
svcinfo lsnode node_name or node_id


7. Perform the following steps to change the WWNN of the node that you just deleted to FFFFF:
Attention: Record and mark the Fibre Channel cables with the SVC node port number (1-4) before removing them from the back of the node being replaced. You must reconnect the cables on the new node exactly as they were on the old node. Looking at the back of the node, the Fibre Channel ports on the SVC nodes are numbered 1-4 from left to right and must be reconnected in the same order, or the port IDs will change, which could impact hosts' access to VDisks or cause problems with adding the new node back into the cluster. The SVC Hardware Installation Guide shows the port numbering of the various node models.
Failure to disconnect the fibre cables now will likely cause SAN devices and SAN management software to discover the new WWPNs that are generated when the WWNN is changed to FFFFF in the following steps. This may cause ghost records to be seen once the node is powered down. These do not necessarily cause a problem, but they may require a reboot of a SAN device to clear out the record. In addition, it may cause problems with AIX dynamic tracking functioning correctly, assuming it is enabled, so we highly recommend disconnecting the node's fibre cables as instructed in step a below before continuing with any other steps.
a. Disconnect the four Fibre Channel cables from this node before powering the node on in the next step.
b. Power on this node using the power button on the front panel and wait for it to boot up before going to the next step.
c. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
d. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed.
e. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.
f. Press the up or down button to increment or decrement the character that is displayed.
Note: The characters wrap F to 0 or 0 to F.
g. Press the left navigation button to move to the next field or the right navigation button to return to the previous field, and repeat step f for each field. At the end of this step, the characters that are displayed must be FFFFF.
h. Press the select button to retain the characters that you have updated and return to the WWNN screen.
i. Press the select button again to apply the characters as the new WWNN for the node.
Note: You must press the select button twice, as steps h and i instruct you to do. After step h, it may appear that the WWNN has been changed, but step i actually applies the change.
8. Power off this node using the power button on the front panel and remove the node from the rack if desired.

9. Install the replacement node and its UPS in the rack, and connect the cables between the node and the UPS according to the SVC Hardware Installation Guide, available at:
http://www.ibm.com/storage/support/2145
Note: Do not connect the Fibre Channel cables to the new node during this step.
10. Power on the replacement node from the front panel with the Fibre Channel cables disconnected. Once the node has booted, ensure that the node displays Cluster: on the front panel and nothing else. If something other than this is displayed, contact IBM Support for assistance before continuing.
11. Record the WWNN of this new node, because you will need it if you plan to redeploy the old nodes being replaced. Perform the following steps to change the WWNN of the replacement node to match the WWNN that you recorded in step 2 on page 799:
a. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
b. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed. Record this number for use in redeployment of the old nodes.
c. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
e. Press the left navigation button to move to the next field or the right navigation button to return to the previous field, and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN you recorded in step 2 on page 799.
f. Press the select button to retain the characters that you have updated and return to the WWNN panel.
g. Press the select button to apply the characters as the new WWNN for the node.
Note: You must press the select button twice, as steps f and g instruct you to do. After step f, it may appear that the WWNN has been changed, but step g actually applies the change.
h. The node should display Cluster: on the front panel and is now ready to begin the process of adding the node to the cluster. If something other than this is displayed, contact IBM Support for assistance before continuing.
12. Connect the Fibre Channel cables to the same port numbers on the new node as they were originally on the old node. See step 7 on page 800.
Note: Do not connect the new nodes to different ports at the switch or director, because this will cause the port IDs to change, which could impact hosts' access to VDisks or cause problems with adding the new node back into the cluster. The new nodes have 4 Gbps HBAs in them, and the temptation is to move them to 4 Gbps switch/director ports at the same time, but this is not recommended while doing the hardware node upgrade. Moving the node cables to faster ports on the switch/director is a separate process that needs to be planned independently of upgrading the nodes in the cluster.


13. Issue the following CLI command to verify that the last five characters of the WWNN are correct:
svcinfo lsnodecandidate
Note: If the WWNN does not match the original node's WWNN exactly as recorded in step 2 on page 799, you must repeat step 11 on page 801.
14. Add the node to the cluster and ensure that it is added back to the same I/O group as the original node. Use the following command, where wwnn_arg and iogroup_name or iogroup_id are the items that you recorded in steps 1 on page 798 and 2 on page 799:
svctask addnode -wwnodename wwnn_arg -iogrp iogroup_name or iogroup_id
15. Verify that all the VDisks for this I/O group are back online and are no longer degraded. If the node replacement process is being done disruptively, such that no I/O is occurring to the I/O group, you still need to wait some period of time (we recommend 30 minutes in this case too) to make sure that the new node is back online and available to take over before you do the next node in the I/O group. See step 3 on page 799.
Both nodes in the I/O group cache data; however, the cache sizes are asymmetric if the remaining partner node in the I/O group is a SAN Volume Controller 2145-4F2 node. In this case, the replacement node is limited by the cache size of the partner node in the I/O group. Therefore, the replacement node does not utilize its full 8 GB cache size until the other 2145-4F2 node in the I/O group is replaced.
You do not have to reconfigure the host multipathing device drivers, because the replacement node uses the same WWNN and WWPNs as the previous node. The multipathing device drivers should detect the recovery of paths that are available to the replacement node. The host multipathing device drivers take approximately 30 minutes to recover the paths. Therefore, do not upgrade the other node in the I/O group for at least 30 minutes after successfully upgrading the first node in the I/O group. If you have other nodes in other I/O groups to upgrade, you can perform that upgrade while you wait the 30 minutes noted above.
16. Repeat steps 2 on page 799 to 15 for each node that you want to replace.
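As an illustration of the CLI portion of this procedure only, the sequence for a single node follows the pattern below. The node name node2, the I/O group name io_grp0, and the WWNN value are hypothetical placeholders; always substitute the values that you recorded in steps 1 and 2, and remember that the front panel steps that change the WWNN of the old and new nodes (steps 7 and 11) are not shown here:

svcinfo lsnode -delim :
svctask stopcluster -node node2
svcinfo lsnode node2
svctask rmnode node2
svcinfo lsnodecandidate
svctask addnode -wwnodename 50050768010027E2 -iogrp io_grp0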

Expanding an existing SVC cluster


In this section, we describe how to expand an existing SVC cluster with new nodes. An SVC cluster can only be expanded in node pairs, which means that you always add at least two nodes to your existing cluster. The maximum number of nodes is eight.
This task assumes the following situation:
Your cluster contains six or fewer nodes.
All nodes that are configured in the cluster are present.
All errors in the cluster error log are fixed.
All managed disks (MDisks) are online.
You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.
There are no VDisks, MDisks, or controllers with a status of degraded or offline.
The SVC configuration has been backed up through the CLI or GUI and the file saved to the master console.


You have downloaded, installed, and run the latest SVC Software Upgrade Test Utility from http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585 to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure.
Perform the following steps to add nodes to an existing cluster:
1. Depending on the model of node being added, it may be necessary to upgrade the existing SVC cluster software to a level that supports the hardware model:
The model 2145-8G4 requires Version 4.2.x or later.
The model 2145-8F4 requires Version 4.1.x or later.
The model 2145-8F2 requires Version 3.1.x or later.
The 2145-4F2 is the original model and thus is supported by Version 1 through Version 4.
It is highly recommended that the existing cluster be upgraded to the latest level of SVC software available; however, the minimum level of SVC cluster software recommended for the 4F2 is Version 3.1.0.5.
2. Install the additional nodes and UPSs in a rack. Do not connect them to the SAN at this time.
3. Ensure that each node being added has a unique WWNN. Duplicate WWNNs can cause serious problems on a SAN and must be avoided. The following is an example of how this can occur: the nodes came from cluster ABC, where they were replaced by brand new nodes. The procedure to replace these nodes in cluster ABC required each brand new node's WWNN to be changed to the old node's WWNN. Adding the old nodes to the same SAN now will cause duplicate WWNNs to appear, with unpredictable results. You need to power up each node separately while it is disconnected from the SAN and use the front panel to view the current WWNN. If necessary, change it to something unique on the SAN. If required, contact IBM Support for assistance before continuing.
4. Power up the additional UPSs and nodes. Do not connect them to the SAN at this time.
5. Ensure that each node displays Cluster: on the front panel and nothing else. If something other than this is displayed, contact IBM Support for assistance before continuing.
6. Connect the additional nodes to the LAN.
7. Connect the additional nodes to the SAN fabric(s).
Attention: Do not add the additional nodes to the existing cluster before the zoning and masking steps below are completed, or SVC will enter a degraded mode and log errors with unpredictable results.
8. Zone the additional node ports in the existing SVC-only zone(s). There should be an SVC zone in each fabric with nothing but the ports from the SVC nodes in it. These zones are needed for the initial formation of the cluster, because nodes need to see each other to form a cluster. This zone may not exist, and the only way the SVC nodes see each other may be through a storage zone that includes all the node ports. However, it is highly recommended to have a separate zone in each fabric with just the SVC node ports included, to avoid the possibility of the nodes losing communication with each other if the storage zone(s) are changed or deleted.
9. Zone the new node ports in the existing SVC/storage zone(s). There should be an SVC/storage zone in each fabric for each disk subsystem used with SVC. Each zone should have all the SVC ports in that fabric along with all the disk subsystem ports in that fabric that will be used by SVC to access the physical disks.


Note: There are exceptions when EMC DMX/Symmetrix or HDS storage is involved. For further information, review the SVC Software Installation and Configuration Guide, available at:
http://www.ibm.com/storage/support/2145
10. On each disk subsystem seen by the SVC, use its management interface to map the LUNs that are currently used by SVC to all the new WWPNs of the new nodes that will be added to the SVC cluster. This is a critical step: the new nodes must see the same LUNs as the existing SVC cluster nodes before the new nodes are added to the cluster; otherwise, problems may arise. Also note that all SVC ports zoned with the back-end storage must see all the LUNs presented to SVC through all of those same storage ports, or SVC will mark the devices as degraded.
11. Once all of the preceding steps are done, you can add the additional nodes to the cluster using the SVC GUI or CLI (see the example sequence that follows this procedure), and the cluster should not mark anything degraded, because the new nodes will see the same cluster configuration, the same storage zoning, and the same LUNs as the existing nodes.
12. Check the status of the controller(s) and MDisks to ensure that nothing is marked degraded. If something is, it is not configured properly, and this needs to be addressed immediately before doing anything else to the cluster. If the cause cannot be determined fairly quickly, remove the newly added nodes from the cluster until the problem is resolved. You can contact IBM Support for assistance.
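For illustration only, after the zoning and LUN mapping steps are complete, adding the new node pair and checking the result from the CLI can look like the following sequence. The WWNNs, node names, and I/O group name are hypothetical placeholders:

svcinfo lsnodecandidate
svctask addnode -wwnodename 50050768010037DC -name node5 -iogrp io_grp2
svctask addnode -wwnodename 50050768010037DD -name node6 -iogrp io_grp2
svcinfo lscontroller
svcinfo lsmdisk -filtervalue status=degraded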

Moving VDisks to a new I/O group


Once new nodes are added to a cluster, you may want to move VDisk ownership from one I/O group to another to balance the workload. This is currently a disruptive process, and the host applications have to be quiesced during it. The actual moving of the VDisk in SVC is simple and quick; however, some host operating systems may need to have their file systems and volume groups varied off or removed, along with their disks, and the multiple paths to the VDisks deleted and rediscovered. In effect, it is the equivalent of discovering the VDisks again, as when they were initially brought under SVC control. This is not a difficult process, but it can take some time to complete, so you must plan accordingly.
This task assumes the following situation:
All steps described in Expanding an existing SVC cluster on page 802 are completed.
All nodes that are configured in the cluster are present.
All errors in the cluster error log are fixed.
All managed disks (MDisks) are online.
There are no VDisks, MDisks, or controllers with a status of degraded or offline.
The SVC configuration has been backed up through the CLI or GUI and the file saved to the master console.
Perform the following steps to move the VDisks:
1. Stop the host I/O.
2. Vary off your file system or shut down your host, depending on your operating system.
3. Move all of the VDisks from the I/O group of the nodes you are replacing to the new I/O group.


4. If you shut down your host, start it again.
5. From each host, issue a rescan of the multipathing software to discover the new paths to the VDisks.
6. See the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered.
7. Vary on your file system.
8. Restart the host I/O.
9. Repeat steps 1 to 8 for each VDisk in the cluster that you want to move.
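The actual move in step 3 is a single command for each VDisk. As a sketch only (the VDisk and I/O group names are hypothetical, and you should verify the exact syntax for your code level in the Command-Line Interface User's Guide), it looks like this:

svctask chvdisk -iogrp io_grp1 DB_vdisk01

Remember that the hosts must remain quiesced until the rescan and path verification in steps 5 and 6 have completed.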

Replacing nodes disruptively (rezoning the SAN)


You can replace SAN Volume Controller 2145-4F2, SAN Volume Controller 2145-8F2, or SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes. This task disrupts your environment, because you must rezone your SAN and the host multipathing device drivers must discover new paths. Access to virtual disks (VDisks) is lost during this task. In fact, you can use this procedure to replace any model node with a different model node.
This task assumes that the following conditions exist:
The cluster software is at V4.2.0 or higher.
All nodes that are configured in the cluster are present.
The new nodes that are configured are not powered on and not connected.
All errors in the cluster error log are fixed.
All managed disks (MDisks) are online.
You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.
There are no VDisks, MDisks, or controllers with a status of degraded or offline.
The SVC configuration has been backed up through the CLI or GUI and the file saved to the master console.
You have downloaded, installed, and run the latest SVC Software Upgrade Test Utility from http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585 to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure.
Perform the following steps to replace nodes:
1. Quiesce all I/O from the hosts that access the I/O group of the node that you are replacing.
2. Delete the node that you want to replace from the cluster and I/O group.
Note: The node is not deleted until the SAN Volume Controller cache is destaged to disk. During this time, the partner node in the I/O group transitions to write-through mode. You can use the command-line interface (CLI) or the SAN Volume Controller Console to verify that the deletion process has completed.
3. Ensure that the node is no longer a member of the cluster.
4. Power off the node and remove it from the rack.


5. Install the replacement (new) node in the rack and connect the uninterruptible power supply (UPS) cables and the Fibre Channel cables. 6. Power on the node. 7. Rezone your switch zones to remove the ports of the node that you are replacing from the host and storage zones. Replace these ports with the ports of the replacement node. 8. Add the replacement node to the cluster and I/O group. Important: Both nodes in the I/O group cache data; however, the cache sizes are asymmetric. The replacement node is limited by the cache size of the partner node in the I/O group. Therefore, the replacement node does not utilize the full size of its cache. 9. From each host, issue a rescan of the multipathing software to discover the new paths to VDisks. Note: If your system is inactive, you can perform this step after you have replaced all nodes in the cluster. The host multipathing device drivers take approximately 30 minutes to recover the paths. 10.Refer to the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered before proceeding to the next step. 11.Repeat steps 1 to 10 for the partner node in the I/O group. Note: After you have upgraded both nodes in the I/O group, the cache sizes are symmetric and the full 8 GB of cache is utilized. 12.Repeat steps 1 to 11 for each node in the cluster that you want to replace. 13.Resume host I/O.
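As an example of the rescan in step 9 only, on an AIX host that uses SDD the new paths can typically be discovered and then verified with the following commands; the exact commands differ by operating system and multipathing driver:

cfgmgr
datapath query device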


Appendix C. Performance data and statistics gathering


It is not the intent of this book to describe performance data and statistics gathering in depth; we simply show a method to process the statistics that we have gathered. For a more comprehensive look at SVC performance, we recommend the Redbooks publication SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Although written at an SVC V4.3.x level, many of the underlying principles remain applicable to SVC 5.1.


SVC performance overview


While storage virtualization with SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe VDisks across multiple disk arrays are the reasons why the performance improvement is most significant when SVC is implemented with midrange disk subsystems, because this technology is often only available with high-end enterprise disk subsystems. To ensure that your storage infrastructure delivers the desired performance and capacity, we recommend that you perform a performance and capacity analysis from time to time to reveal the business requirements of your storage environment.

Performance considerations
When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor, of a given system. At the same time, you must consider the workload for which you identify a limiting factor, because the limiting component might not be the same for different workloads. When you design a storage infrastructure using SVC, or operate an existing SVC storage infrastructure, you must therefore take the performance and capacity of your infrastructure into consideration; hence, monitoring your SVC environment is essential to guarantee and sustain the required performance.

SVC
The SVC cluster is scalable up to eight nodes, and the performance is almost linear when adding more nodes into an SVC cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity to have a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, therefore creating a greater level of concurrent I/O to the back end without overloading a single disk or array. In the following sections, we discuss the performance of the SVC and assume that there are no bottlenecks in the SAN or on the disk subsystem.

Performance monitoring
In this section, we discuss some performance monitoring techniques.

Collecting performance statistics


By default, performance statistics are not collected. You can start or stop performance statistics collection by using the svctask startstats and svctask stopstats commands, as described in Chapter 7, SVC operations using the CLI on page 337. You can also start or stop performance statistics collection by using the SVC GUI, as described in Chapter 8, SVC operations using the GUI on page 469.


Statistics gathering is enabled or disabled on a cluster basis. When gathering is enabled, all nodes in the cluster gather statistics. SVC supports statistics sampling periods from 1 to 60 minutes, in steps of one minute. Previous versions of SVC provided per cluster statistics; these were later superseded by per node statistics, which provide a greater range of information. From SVC 5.1.0 onwards, only per node statistics are provided; per cluster statistics are no longer generated, and clients should use the per node statistics instead.
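For example, to have every node in the cluster sample its statistics every 15 minutes and to stop the collection again later, commands of the following form can be used (the interval value is only an example):

svctask startstats -interval 15
svctask stopstats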

Statistics file naming


The generated files are written to the /dumps/iostats/ directory. The file names have the following formats:
Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
Nv_stats_<node_frontpanel_id>_<date>_<time> for VDisk statistics
Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd>, and the time is in the form <hhmmss>. An example of an MDisk statistics file name is Nm_stats_1_020808_105224. Example 9-70 shows some typical MDisk and VDisk statistics file names.
Example 9-70 File names of per node statistics
IBM_2145:ITSO-CLS2:admin>svcinfo lsiostatsdumps
id iostat_filename
0 Nm_stats_110775_090904_064337
1 Nv_stats_110775_090904_064337
2 Nn_stats_110775_090904_064337
3 Nm_stats_110775_090904_064437
4 Nv_stats_110775_090904_064437
5 Nn_stats_110775_090904_064437
6 Nm_stats_110775_090904_064537
7 Nv_stats_110775_090904_064537
8 Nn_stats_110775_090904_064537

Tip: You can use pscp.exe, which is installed with PuTTY, from an MS-DOS command line prompt to copy these files to local drives, and WordPad can be used to open them. For example:
C:\Program Files\PuTTY>pscp -load ITSO-CLS1 admin@10.64.210.242:/dumps/iostats/* c:\temp\iostats
The -load parameter is used to specify the session defined in PuTTY.
After you have saved your performance statistics data files, you can format and merge the data, because the files are in .xml format, to obtain more detail about the performance of your SVC environment.
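As a minimal sketch of this kind of post-processing, the following Perl fragment loads one copied statistics file and dumps its structure so that you can decide which counters to extract. It assumes only that the XML::Simple and Data::Dumper Perl modules are installed on the workstation; it makes no assumption about the element names inside the file:

#!/usr/bin/perl
# Dump the structure of one SVC per node statistics file.
use strict;
use XML::Simple;
use Data::Dumper;

my $file = shift or die "Usage: $0 <statistics file>\n";
# ForceArray keeps repeated elements as lists; KeyAttr => [] leaves the
# structure exactly as it appears in the file.
my $stats = XMLin($file, ForceArray => 1, KeyAttr => []);
print Dumper($stats);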


One example of an unsupported tool that is provided as-is is the SVC Performance Monitor (svcmon); its User's Guide can be found at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177
You can also process your statistics data with a spreadsheet application in order to produce a report such as the one shown in Figure 9-77 on page 810.

Figure 9-77 Spreadsheet example

Performance data collection and TPC-SE


Even though the performance statistics data files are readable as standard .xml files, TPC-SE (TPC for Disk) is the official and supported IBM tool for collecting and analyzing statistics data and for providing performance reports for storage subsystems. TPC for Disk comes preinstalled on your SSPC console and can be made available by activating the specific license for TPC for Disk. Activating this license upgrades your running TPC Basic Edition to TPC-SE. More information about using TPC to monitor your storage subsystem is provided in SAN Storage Performance Management Using TotalStorage Productivity Center, SG24-7364, found at:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open


More information about how to create a performance report is provided in IBM TPC Reporter for Disk (a utility for anyone running IBM TotalStorage Productivity Center), available at:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2618
TPC Reporter for Disk is being withdrawn with Tivoli Storage Productivity Center (TPC) Version 4.1; the replacement function for this utility is packaged with TPC Version 4.1 in Business Intelligence and Reporting Tools (BIRT).


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks
For information about ordering these publications, see How to get Redbooks on page 814. Note that some of the documents referenced here may be available in softcopy only.
IBM System Storage SAN Volume Controller, SG24-6423-05
Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
IBM System Storage: Implementing an IBM SAN, SG24-6116
Introduction to Storage Area Networks, SG24-5470
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
Using the SVC for Business Continuity, SG24-7371
IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548

Other resources
These publications are also relevant as further information sources:
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
IBM System Storage Master Console: Installation and User's Guide, GC30-4090
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563


Referenced Web sites


These Web sites are also relevant as further information sources:
IBM TotalStorage home page:
http://www.storage.ibm.com
SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows SSH freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows:
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page:
http://www.sysinternals.com
Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization Home Page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html

How to get Redbooks


You can search for, view, or download Redbooks, Redpapers, Technotes, draft publications and Additional materials, as well as order hardcopy Redbooks, at this Web site: ibm.com/redbooks

Help from IBM


IBM Support and downloads
ibm.com/support
IBM Global Services

ibm.com/services


Index
Symbols
) 56 available managed disks 341

B
back-end application 59 background copy 299, 306, 319, 326 background copy bandwidth 331 background copy progress 419, 441 background copy rate 274275 backup 254 of data with minimal impact on production 260 backup speed 255 backup time 254 bandwidth 64, 92, 318, 585, 610 bandwidth impact 331 basic setup requirements 129 bat script 785 bind 251 bitmaps 259 boot 95 boss node 35 bottleneck 49 bottlenecks 9798, 808 budget 26 budget allowance 26 business requirements 97, 808

Numerics
64-bit kernel 56

A
abends 464 abends dump 464 access pattern 517 active quorum disk 36 active SVC cluster 562 add a new volume 168, 173 add a node 388 add additional ports 501 add an HBA 352 Add SSH Public Key 128 administration tasks 493, 558 Advanced Copy Services 90 AIX host system 182 AIX specific information 162 AIX toolbox 182 AIX-based hosts 162 alias 27 alias string 158 aliases 27 analysis 97, 655, 808 application server guidelines 89 application testing 255 assign VDisks 366 assigned VDisk 168, 173 asynchronous 308 asynchronous notifications 277278 Asynchronous Peer-to-Peer Remote Copy 308 asynchronous remote 308 asynchronous remote copy 32, 281, 308309 asynchronous replication 329 asynchronously 308 attributes 527 audit log 40 Authentication 160 authentication 41, 57, 131 authentication service 44 Autoexpand 25 automate tasks 784 automatic Linux system 224 automatic update process 225 automatically discover 340 automatically formatted 52 automatically restarted 645 automation 375 auxiliary 318, 326, 424, 446 auxiliary VDisk 309, 319, 326

C
cable connections 71 cable length 48 cache 38, 267, 309 caching 98 caching capability 97, 808 candidate node 389 capacity 87, 181 capacity information 538 capacity measurement 507 CDB 27 challenge message 30 Challenge-Handshake Authentication Protocol 30 change the IP addresses 383 Channel extender 58 channel extender 61 channels 315 CHAP 30 CHAP authentication 30, 160 CHAP secret 30, 160 check software levels 636 chpartnership 332 chrcconsistgrp 333 chrcrelationship 333 chunks 85, 683 CIM agent 39 CIM Client 38 CIMOM 28, 38, 124, 159

Copyright IBM Corp. 2009. All rights reserved.

817

6423IX.fm

Draft Document for Review January 12, 2010 4:41 pm

CLI 124, 433 commands 182 scripting for SVC task automation 375 Cluster 58 cluster 34 adding nodes 559 creation 388, 559 error log 458 IP address 112 shutting down 340, 386, 395, 550 time zone 383384 viewing properties 379, 544 cluster error log 655 Cluster management 38 cluster nodes 34 cluster overview 34 cluster partnership 289, 315 cluster properties 385 clustered ethernet port 160 clustered server resources 34 clusters 64 colliding rites 311 Colliding writes 310 Command Descriptor Block 27 command syntax 377 COMPASS architecture 46 compression 95 concepts 7 concurrent instances 682 concurrent software upgrade 449 configurable warning capacity 25 configuration 153 restoring 672 configuration node 35, 48, 58, 160, 388, 559 configuration rules 52 configure AIX 163 configure SDD 251 configuring the GUI 114 connected 293, 320321 connected state 296, 321, 323 connectivity 37 consistency 281, 310, 322 consistency freeze 296, 304, 323 Consistency Group 58 consistency group 260, 262263 limits 263 consistent 32, 294295, 321322 consistent data set 254 Consistent Stopped state 292, 319 Consistent Synchronized state 292, 320, 599, 626 ConsistentDisconnected 298, 325 ConsistentStopped 296, 323 ConsistentSynchronized 297, 324 constrained link 318 container 85 contingency capacity 25 controller, renaming 339 conventional storage 675 cookie crumbs recovery 467 cooling 65

Copied 58 copy bandwidth 92, 331 copy operation 33 copy process 304, 333 copy rate 265, 275 copy rate parameter 90 copy service 308 Copy Services managing 396, 567 COPY_COMPLETED 277 copying state 402 corruption 255 Counterpart SAN 58 counterpart SAN 58, 61, 99 CPU cycle 49 create a FlashCopy 398 create a new VDisk 505 create an SVC partnership 585, 609 create mapping command 398, 567, 569 create New Cluster 117 create SVC partnership 414, 435 creating a VDisk 354 creating managed disk groups 483 credential caching 45 current cluster state 36 Cygwin 213

D
data backup with minimal impact on production 260 moving and migration 254 data change rates 95 data consistency 308, 397 data corruption 322 data flow 74 data migration 65, 682 data migration and moving 254 data mining 255 data mover appliance 369 database log 313 database update 312 degraded mode 84 delete a FlashCopy 406 a host 352 a host port 354 a port 502 a VDisk 364365, 513, 540 ports 353 Delete consistency group command 406, 579 Delete mapping command 578 dependent writes 262, 286287, 312, 314 destaged 38 destructive 460 detect the new MDisks 340 detected 340 device specific modules 188 differentiator 50 directory protocol 44 dirty bit 299, 326

818

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423IX.fm

disconnected 293, 320321 disconnected state 321 discovering assigned VDisk 168, 173, 190 discovering newly assigned MDisks 481 disk access profile 363 disk controller renaming 478 systems 338 viewing details 338, 477 disk internal controllers 50 disk timeout value 245 disk zone 74 Diskpart 195 display summary information 341 displaying managed disks 490 distance 60, 279 distance limitations 280 DMP 248 documentation 64, 475 DSMs 188 dump I/O statistics 462 I/O trace 462 listing 460, 663 other nodes 463 durability 50 dynamic pathing 248249 dynamic shrinking 536 dynamic tracking 164

excludes 481 Execute Metro Mirror 418, 440 expand a VDisk 178, 194, 365 a volume 195 expand a space-efficient VDisk 365 expiry timestamp 44 expiry timestamps 45 extended distance solutions 279 Extent 59 extent 85, 676 extent level 676 extent sizes 85

F
fabric remote 99 fabric interconnect 60 factory WWNN 798 failover 59, 248, 309 failover only 228 failover situation 280 fan-in 59 fast fail 164 fast restore 255 FAStT 279 FC optical distance 48 feature log 662 feature, licensing 659 features, licensing 460 featurization log 461 Featurization Settings 121 Fibre Channel interfaces 47 Fibre Channel port fan in 61, 99 Fibre Channel Port Login 28 Fibre Channel port logins 59 Fibre Channel ports 71 file system 231 filtering 377, 470 filters 377 fixed error 459, 655 FlashCopy 33, 254 bitmap 264 how it works 255, 259 image mode disk 268 indirection layer 263 mapping 255 mapping events 269 rules 268 serialization of I/O 276 synthesis 275 FlashCopy indirection layer 263 FlashCopy mapping 260, 269 FlashCopy mapping states 271 Copying 272 Idling/Copied 272 Prepared 273 Preparing 272 Stopped 272 Suspended 272 Index

E
elapsed time 90 empty MDG 343 empty state 299, 326 Enterprise Storage Server (ESS) 279 entire VDisk 260 error 296, 320, 323, 343, 458, 655 Error Code 58, 646 error handling 276 Error ID 58 error log 458, 655 analyzing 655 file 645 error notification 457, 647 error number 646 error priority 656 ESS 44 ESS (Enterprise Storage Server) 279 ESS server 44 ESS to SVC 687 ESS token 44 eth0 48 eth0 port 48 eth1 48 Ethernet 71 Ethernet connection 72 event 458, 655 event log 461 events 291, 319 Excluded 59

819

6423IX.fm

Draft Document for Review January 12, 2010 4:41 pm

FlashCopy mappings 263 FlashCopy properties 263 FlashCopy rate 90 flexibility 97, 808 flush the cache 573 forced deletion 501 foreground I/O latency 331 format 506, 510, 515, 522 free extents 364 front-end application 59 FRU 59 Full Feature Phase 27

host adapter configuration settings 184 host bus adapter 348 Host ID 59 host workload 526 housekeeping 476, 543 HP-UX support information 248249

I
I/O budget 26 I/O Governing 26 I/O governing 26, 363, 517 I/O governing rate 363 I/O Group 60 I/O group 37, 60 name 473 renaming 391, 558 viewing details 391 I/O pair 67 I/O per secs 64 I/O statistics dump 462 I/O trace dump 462 ICAT 3839 identical data 318 idling 297, 324 idling state 304, 334 IdlingDisconnected 297, 325 Image Mode 60 image mode 526, 685 image mode disk 268 image mode MDisk 685 image mode to image mode 705 image mode to managed mode 700 image mode VDisk 680 image mode virtual disks 88 inappropriate zoning 82 inconsistent 294, 321 Inconsistent Copying state 292, 320 Inconsistent Stopped state 292, 319, 598599, 626 InconsistentCopying 296, 323 InconsistentDisconnected 298, 325 InconsistentStopped 296, 323 index number 666 Index/Secret/Challenge 30 indirection layer 263 indirection layer algorithm 265 informational error logs 277 initiator 158 initiator name 27 input power 386 install 63 insufficient bandwidth 275 integrity 262, 287, 313 interaction with the cache 267 intercluster communication and zoning 315 intercluster link 289, 316 intercluster link bandwidth 331 intercluster link maintenance 289290, 316 intercluster Metro Mirror 279, 308 intercluster zoning 289290, 316 Internet Storage Name Service 30, 59, 159

G
gateway IP address 112 GBICs 60 general housekeeping 476, 543 generating output 378 generator 127 geographically dispersed 279 Global Mirror guidelines 93 Global Mirror protocol 32 Global Mirror relationship 312 Global Mirror remote copy technique 308 GM 308 gminterdelaysimulation 328 gmintradelaysimulation 328 gmlinktolerance 328329 governing 26 governing rate 26 governing throttle 517 graceful manner 390 grain 59, 264, 276 grain sizes 90 grains 90, 275 granularity 260 GUI 114, 130

H
Hardware Management Console 39 hardware nodes 46, 56 hardware overview 46 hash function 30 HBA 59, 84, 348 HBA fails 84 HBA ports 89 heartbeat signal 37 heartbeat traffic 92 help 475, 543 high availability 34, 64 home directory 182 host and application server guidelines 89 configuration 153 creating 348 deleting 500 information 494 showing 373 systems 74

820

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423IX.fm

interswitch link (ISL) 61 interval 385 intracluster Metro Mirror 279, 308 IP address modifying 383, 544 IP addresses 6465, 544 IP subnet 72 ipconfig 136 IPv4 135 ipv4 and 48 IPv4 stack 140 IPv6 135 IPv6 address 139 IPv6 addresses 136 IPv6 connectivity 137 IQN 27, 59, 158 iSCSI 26, 48, 64, 159 iSCSI Address 27 iSCSI client 158 iSCSI IP address failover 160 iSCSI Multipathing 31 iSCSI Name 27 iSCSI node 27 iSCSI protocol 56 iSCSI Qualified Name 27, 59 iSCSI support 5657 iSCSI target node failover 160 ISL (interswitch link) 61 ISL hop count 279, 308 iSNS 30, 59, 159 issue CLI commands 213 ivp6 48

list the dumps 663 listing dumps 460, 663 Load balancing 228 Local authentication 40 local cluster 301, 328 Local fabric 60 local fabric interconnect 60 Local users 42 log 313 logged 458 Logical Block Address 299, 326 logical configuration data 464 logical unit numbers 341 Login Phase 27 logs 313 lsrcrelationshipcandidate 332 LU 60 LUNs 60

M
magnetic disks 49 maintenance levels 184 maintenance procedures 645 maintenance tasks 448, 636 Managed 60 Managed disk 60 managed disk 60, 479 displaying 490 working with 477 managed disk group 345 creating 483 viewing 486 Managed Disks 60 managed mode MDisk 685 managed mode to image mode 702 managed mode virtual disk 88 management 97, 808 map a VDisk 515 map a VDisk to a host 366 mapping 259 mapping events 269 mapping state 269 Master 60 master 318, 326 master console 65, 71 master VDisk 319, 326 maximum supported configurations 57 MC 60 MD5 checksum hash 30 MDG 60 MDG information 538 MDG level 344 MDGs 65 MDisk 60, 64, 479, 490 adding 343, 488 discovering 340, 481 including 343, 481 information 479 modes 685 name parameter 341 Index

J
Jumbo Frames 30

K
kernel level 225 key 160 key files on AIX 182

L
LAN Interfaces 48 last extent 686 latency 32, 92 LBA 299, 326 license 112 license feature 659 licensing feature 460 licensing feature settings 460, 659 limiting factor 97, 808 link errors 47 Linux 182 Linux kernel 35 Linux on Intel 224 list dump 460 list of MDisks 491 list of VDisks 492

821

6423IX.fm

Draft Document for Review January 12, 2010 4:41 pm

removing 347, 489 renaming 342, 480 showing 371, 491, 537 showing in group 344 MDisk group creating 346, 483 deleting 347, 487 name 473 renaming 346, 486 showing 344, 372, 482, 538 viewing information 346 MDiskgrp 60 Metro Mirror 279 Metro Mirror consistency group 302303, 305306, 332336 Metro Mirror features 281, 309 Metro Mirror process 290, 317 Metro Mirror relationship 302, 304, 306, 311, 332333, 336, 597, 624 microcode 37 Microsoft Active Directory 43 Microsoft Cluster 194 Microsoft Multi Path Input Output 188 migrate 675 migrate a VDisk 680 migrate between MDGs 680 migrate data 685 migrate VDisks 368 migrating multiple extents 676 migration algorithm 683 functional overview 682 operations 676 overview 676 tips 687 migration activities 676 migration phase 526 migration process 369 migration progress 681 migration threads 676 mirrored 309 mirrored copy 308 mirrored VDisks 53 mkpartnership 331 mkrcconsistgrp 332 mkrcrelationship 332 MLC 49 modify a host 351 modifying a VDisk 362 mount 231 mount point 231 moving and migrating data 254 MPIO 89, 188 MSCS 194 MTU sizes 30, 159 multi layer cell 49 multipath configuration 165 multipath I/O 89 multipath storage solution 188 multipathing device driver 89

Multipathing drivers 31 multiple disk arrays 97, 808 multiple extents 676 multiple paths 31 multiple virtual machines 239

N
network bandwidth 95 Network Entity 158 Network Portals 158 new code 645 new disks 170, 176 new mapping 366 Node 61 node 35, 60, 387 adding 388 adding to cluster 559 deleting 390 failure 276 port 59 renaming 389 shutting down 390 using the GUI 558 viewing details 387 node details 387 node discovery 666 node dumps 463 node level 387 Node Unique ID 35 nodes 64 non-preferred path 248 non-redundant 58 non-zero contingency 25 N-port 61

O
offline rules 679 offload features 30 older disk systems 98 on screen content 377, 470, 543 online help 475, 543 on-screen content 377 OpenSSH 182 OpenSSH client 213 operating system versions 184 ordering 32, 262 organizing on-screen content 377 other node dumps 463 overall performance needs 64 Oversubscription 61 oversubscription 61 overwritten 259, 455

P
package numbering and version 448, 636 parallelism 682 partial extents 25 partial last extent 686

822

SAN Volume Controller V5.1

Draft Document for Review January 12, 2010 4:41 pm

6423IX.fm

partnership 289, 315, 328 passphrase 127 path failover 248 path failure 277 path offline 277 path offline for source VDisk 277 path offline for target VDisk 277 path offline state 277 path-selection policy algorithms 228 peak 331 peak workload 92 pended 26 per cluster 682 per managed disk 682 performance 87 performance advantage 97, 808 performance boost 45 performance considerations 808 performance improvement 97, 808 performance monitoring tool 93 performance requirements 64 performance scalability 34 performance statistics 93 performance throttling 517 physical location 65 physical planning 65 physical rules 67 physical site 65 Physical Volume Links 249 PiT 34 PiT consistent data 254 PiT copy 264 PiT semantics 262 planning rules 64 plink 784 PLOGI 28 Point in Time 34 point in time 34 point-in-time copy 295, 322 policy decision 300, 327 port adding 352, 501 deleting 353, 502 port binding 251 port mask 8990 Power Systems 182 Powerware 67 PPRC background copy 299, 306, 326 commands 301, 327 configuration limits 327 detailed states 295, 323 preferred access node 87 preferred path 248 pre-installation planning 64 Prepare 61 prepare (pre-trigger) FlashCopy mapping command 400 PREPARE_COMPLETED 277 preparing volumes 173, 178 pre-trigger 400

primary 309, 424, 446 primary copy 326 priority 368 priority setting 368 private key 124, 127, 182, 784 production VDisk 326 provisioning 331 pseudo device driver 165 public key 124, 127, 182, 784 PuTTY 39, 124, 129, 387 CLI session 133 default location 127 security alert 134 PuTTY application 133, 390 PuTTY Installation 213 PuTTY Key Generator 127128 PuTTY Key Generator GUI 125 PuTTY Secure Copy 451 PuTTY session 127, 134 PuTTY SSH client software 213 PVLinks 249

Q
QLogic HBAs 225 Queue Full Condition 26 quiesce 387 quiesce time 573 quiesced 804 quorum 35 quorum candidates 36 Quorum Disk 35 quorum disk 36, 666 setting 666 quorum disk candidate 36 quorum disks 25

R
RAID 61 RAID controller 74 RAMAC 49 RAS 61 read workload 52 real capacity 25 real-time synchronized 279280 reassign the VDisk 368 recall commands 338, 377 recommended levels 636 Redbooks Web site 814 Contact us xxvii redundancy 48, 92 redundant 58 Redundant SAN 61 redundant SAN 61 redundant SVC 563 relationship 260, 308, 318 relationship state diagram 291, 319 reliability 87 Reliability Availability and Serviceability 61 Remote 61


Remote authentication 40 remote cluster 60 remote fabric 60, 99 interconnect 60 Remote users 43 remove a disk 210 remove a VDisk 182 remove an MDG 347 remove WWPN definitions 353 rename a disk controller 478 rename an MDG 486 rename an MDisk 480 renaming an I/O group 558 repartitioning 87 rescan disks 192 restart the cluster 387 restart the node 391 restarting 422, 445 restore points 256 restore procedure 672 Reverse FlashCopy 34, 256 reverse FlashCopy 56 RFC3720 27 rmrcconsistgrp 335 rmrcrelationship 335 round robin 87, 228, 248

S
sample script 787 SAN Boot Support 248, 250 SAN definitions 99 SAN fabric 74 SAN planning 71 SAN Volume Controller 61 documentation 475 general housekeeping 476, 543 help 475, 543 virtualization 38 SAN Volume Controller (SVC) 60 SAN zoning 123 SATA 94 scalable 98, 808 scalable architecture 51 SCM 50 scripting 300, 327, 375 scripts 195, 783 SCSI 61 SCSI Disk 60 SCSI primitives 340 SDD 89, 165, 172, 177, 250 SDD (Subsystem Device Driver) 172, 177, 225, 250, 689 SDD Dynamic Pathing 248 SDD installation 166 SDD package version 165, 186 SDDDSM 188 secondary 309 secondary copy 326 secondary site 64 secure data flow 123 secure session 390

Secure Shell (SSH) 124 Secure Shell connection 38 separate physical IP networks 48 sequential 88, 354, 506, 510, 522, 532 serial numbers 168, 175 serialization 276 serialization of I/O by FlashCopy 276 Service Location Protocol 30, 61, 159 service, maintenance using the GUI 635 set attributes 527 set the cluster time zone 549 set up Metro Mirror 412, 433, 583, 608 SEV 56, 363 shells 375 show the MDG 538 show the MDisks 537 shrink a VDisk 536 shrinking 536 shrinkvdisksize 369 shut down 194 shut down a single node 390 shut down the cluster 386, 550 Simple Network Management Protocol 300, 327, 343 single layer cell 49 single point of failure 61 single sign on 57 single sign-on 39, 44 site 65 SLC 49 SLP 30, 61, 159 SLP daemon 30 SNIA 2 SNMP 300, 327, 343 SNMP alerts 481 SNMP manager 457 SNMP trap 277 software upgrade 448, 636, 638 software upgrade packages 636 Solid State Disk 56 Solid State Drive 35 Solid State Drives 46 solution 97 sort 473 sort criteria 473 sorting 473 source 275, 326 space-efficient 357 Space-efficient background copy 317 space-efficient VDisk 370, 526 space-efficient VDisks 509 Space-Efficient Virtual Disk 56 space-efficient volume 369 special migration 686 split per second 90 splitting the SAN 61 SPoF 61 spreading the load 87 SSD 51 SSD market 50 SSD solution 50


SSD storage 52 SSH 38, 123, 784 SSH (Secure Shell) 124 SSH Client 39 SSH client 182, 213 SSH client software 124 SSH key 41 SSH keys 124, 129 SSH server 123124 SSH-2 124 SSO 44 SSPC 39, 62 stack 684 stand-alone Metro Mirror relationship 417, 439 start (trigger) FlashCopy mapping command 402403, 574575 start a PPRC relationship command 304, 333334 startrcrelationship 333 state 295296, 323 connected 293, 320 consistent 294295, 321322 ConsistentDisconnected 298, 325 ConsistentStopped 296, 323 ConsistentSynchronized 297, 324 disconnected 293, 320 empty 299, 326 idling 297, 324 IdlingDisconnected 297, 325 inconsistent 294, 321 InconsistentCopying 296, 323 InconsistentDisconnected 298, 325 InconsistentStopped 296, 323 overview 291, 320 synchronized 295, 322 state fragments 294, 321 state overview 293, 327 state transitions 277, 320 states 269, 275, 291, 319 statistics 385 statistics collection 546 starting 546 stopping 385, 547 statistics dump 462 stop 320 stop FlashCopy consistency group 405, 576 stop FlashCopy mapping command 404 STOP_COMPLETED 278 stoprcconsistgrp 334 stoprcrelationship 334 storage cache 37 storage capacity 64 Storage Class Memory 50 stripe VDisks 97, 808 striped 506, 510, 522, 532 striped VDisk 354 subnet mask IP address 112 Subsystem Device Driver (SDD) 172, 177, 225, 250, 689 Subsystem Device Driver DSM 188 SUN Solaris support information 248 superuser 380

surviving node 390 suspended mapping 404 SVC basic installation 109 task automation 375 SVC cluster 559 SVC cluster candidates 585, 610 SVC cluster partnership 301, 328 SVC cluster software 639 SVC configuration 64 backing up 668 deleting the backup 672 restoring 672 SVC Console 39 SVC device 62 SVC GUI 39 SVC installations 84 SVC master console 124 SVC node 37, 84 SVC PPRC functions 281 SVC setup 154 SVC SSD storage 52 SVC superuser 41 svcinfo 338, 342, 377 svcinfo lsfreeextents 681 svcinfo lshbaportcandidate 352 svcinfo lsmdiskextent 681 svcinfo lsmigrate 681 svcinfo lsVDisk 371 svcinfo lsVDiskextent 681 svcinfo lsVDiskmember 371 svctask 338, 342, 377, 379 svctask chlicense 460 svctask finderr 454 svctask mkfcmap 301304, 328, 331334, 398, 567, 569 switching copy direction 424, 446, 606, 632 switchrcconsistgrp 336 switchrcrelationship 336 symmetrical 1 symmetrical network 61 symmetrical virtualization 1 synchronized 295, 318, 322 synchronized clocks 45 synchronizing 318 synchronous data mirroring 56 synchronous reads 684 synchronous writes 684 synthesis 275 Syslog error event logging 57 System Storage Productivity Center 62

T
T0 34 target 158, 326 target name 27 test new applications 255 threads parameter 519 threshold level 26 throttles 517 throttling parameters 517


tie breaker 36 tie-break situations 36 tie-break solution 666 tie-breaker 36 time 383 time zone 383384 timeout 245 timestamp 4445 Time-Zero copy 34 Tivoli Directory Server 43 Tivoli Embedded Security Services 40, 44 Tivoli Integrated Portal 39 Tivoli Storage Productivity Center 39 Tivoli Storage Productivity Center for Data 39 Tivoli Storage Productivity Center for Disk 39 Tivoli Storage Productivity Center for Replication 39 Tivoli Storage Productivity Center Standard Edition 39 token 4445 token expiry timestamp 45 token facility 44 trace dump 462 traffic 92 traffic profile activity 64 transitions 685 trigger 402403, 574575

U
unallocated capacity 197 unallocated region 317 unassign 514 unconfigured nodes 388 undetected data corruption 322 unfixed error 459, 655 uninterruptible power supply 67, 71, 84, 386 unmanaged MDisk 685 unmap a VDisk 368 up2date 224 updates 224 upgrade 636637 upgrade precautions 449 upgrading software 636 use of Metro Mirror 299, 326 used capacity 25 used free capacity 25 User account migration 39 using SDD 172, 177, 225, 250

V
VDisk 490 assigning 515 assigning to host 366 creating 354, 356, 505 creating in image mode 357, 526 deleting 364, 509, 513 discovering assigned 168, 173, 190 expanding 365 I/O governing 362 image mode migration concept 685 information 356, 504 mapped to this host 367 migrating 89, 368, 518 modifying 362, 517 path offline for source 277 path offline for target 277 showing 492 showing for MDisk 370, 482 showing map to a host 539 showing using group 371 shrinking 369, 519 working with 354 VDisk discovery 159 VDisk mirror 526 VDisk Mirroring 52 VDisk-to-host mapping 368 deleting 514 Veritas Volume Manager 248 View I/O Group details 391 viewing managed disk groups 486 virtual disk 260, 354, 466, 504 Virtual Machine File System 239 virtualization 38 Virtualization Limit 121 VLUN 60 VMFS 239-241 VMFS datastore 243 volume group 178 Voting Set 35 voting set 35-36 vpath configured 170, 176

W
warning capacity 25 warning threshold 370 Web interface 251 Windows 2000 based hosts 183 Windows 2000 host configuration 183, 237 Windows 2003 188 Windows host system CLI 213 Windows NT and 2000 specific information 183 working with managed disks 477 workload cycle 93 worldwide node name 798 worldwide port name 165 Write data 38 Write ordering 322 write ordering 285, 312, 322 write through mode 84 write workload 93 writes 313 write-through mode 38 WWNN 798 WWPNs 165, 348, 353, 497

Y
YaST Online Update 224


Z
zero buffer 317 zero contingency 25 Zero Detection 56 zero-detection algorithm 25 zone 74 zoning capabilities 74 zoning recommendation 193, 207



Back cover

Implementing the IBM System Storage SAN Volume Controller V5.1


Install, use, and troubleshoot the SAN Volume Controller
Learn about and how to attach iSCSI hosts
Understand what SSD has to offer
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance that maps the virtualized volumes seen by hosts and applications onto physical volumes on the underlying storage devices. Each server within the SAN has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues to run using the same virtual addresses that it had before, so volumes or storage can be added or moved while the server is still running. This IBM virtualization technology improves management of information at the block level in a network, enabling applications and servers to share storage devices on the network.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-6423-07 ISBN
